
Engineering with Computers

https://doi.org/10.1007/s00366-019-00795-0

ORIGINAL ARTICLE

Enhanced leadership-inspired grey wolf optimizer for global optimization problems

Shubham Gupta1 · Kusum Deep1

Received: 26 January 2019 / Accepted: 5 June 2019


© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract
Grey wolf optimizer (GWO) is a recently developed population-based algorithm in the area of nature-inspired optimization. The leading hunters in GWO are responsible for exploring new promising regions of the search space. However, in some circumstances, the classical GWO suffers from premature convergence due to stagnation at sub-optimal solutions, and insufficient guidance of the search leads to slow convergence. Therefore, to alleviate these issues, an improved leadership-based GWO called GLF–GWO is introduced in the present paper. In GLF–GWO, the leaders are updated through a Levy-flight search mechanism. The proposed GLF–GWO algorithm enhances the search efficiency of the leading hunters in GWO and provides better guidance to accelerate the search process. In the GLF–GWO algorithm, greedy selection is introduced to prevent the leaders from diverging from discovered promising areas of the search space. To validate the efficiency of the GLF–GWO, the standard benchmark suites IEEE CEC 2014 and IEEE CEC 2006 are adopted. The proposed GLF–GWO algorithm is also employed to solve some real engineering problems. Experimental results reveal that the proposed GLF–GWO algorithm significantly improves the performance of the classical version of GWO.

Keywords  Numerical optimization · Swarm intelligence · No free lunch theorem · Levy-flight search

* Shubham Gupta
sgupta@ma.iitr.ac.in

Kusum Deep
kusumfma@iitr.ac.in

1 Department of Mathematics, Indian Institute of Technology Roorkee, Uttarakhand 247667, India

1 Introduction

Swarm intelligence has become an interesting and emerging field in the area of numerical optimization that is widely used to solve many real-world problems. Swarm intelligence is based on the collaborative behavior of various species such as ants, whales, bees, wolves, and many others. Swarm intelligence-based algorithms start with a randomly generated population and iteratively utilize the social learning ability of creatures to find a global solution of the optimization problem. There are many optimization techniques available in the literature that are based on the intelligent and collective behavior of creatures and are used when conventional optimization techniques fail to solve an optimization problem. These techniques are also known as nature-inspired optimization techniques, as they are designed from the simulation of nature's behavior. Some popular nature-inspired techniques based on swarm intelligence are particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], the artificial bee colony (ABC) algorithm [3], the ant lion optimizer (ALO) [4], the grey wolf optimizer (GWO) [5], and so on. These techniques have shown the great potential of swarm intelligence in solving real-world optimization problems.

In the literature, a large number of algorithms have been proposed by observing natural activities. This fact can be explained with the no free lunch (NFL) theorem [6]. The NFL theorem states that it is impossible to design a single optimization algorithm which is suitable for all optimization problems. In other words, on average, all optimization algorithms perform equally over the set of all optimization problems.

In the present paper, the grey wolf optimizer (GWO), which was introduced by Mirjalili et al. [5] and is a recent addition to the field of swarm intelligence, is selected for our study. The reason for selecting GWO for the study is its specific, leadership hierarchy-based search behavior. Three different leading search agents (known as the alpha, beta, and delta grey wolves) are used in GWO to lead and guide the search process. These leaders enhance the exploration


ability of the search agents towards the promising regions of the search space. The literature [5] also shows that GWO has better potential than some other state-of-the-art algorithms such as PSO, differential evolution (DE), evolution strategy (ES), and the gravitational search algorithm (GSA) in dealing with various optimization problems. Since the development of GWO, it has been applied successfully to various application problems. For adjusting the parameters of the PID controller in DC motors, Madadi and Motlagh have used GWO [7]. The performance of GWO has been investigated for training multilayer perceptrons [8] and for the analysis of surface waves [9]. In the literature, GWO was successfully applied to the optimal reactive power dispatch problem [10]. In Ref. [11], the load frequency control (LFC) problem is solved using classical GWO. In Ref. [12], GWO has been employed for the unmanned combat aerial vehicle (UCAV) path planning problem. Economic load dispatch problems are also solved using GWO [13].

The literature shows that GWO has advantages over various optimization techniques because of its simplicity, easy implementation, and ability to avoid local optima. Although it has been suggested [5], based on the performance of classical GWO, that it has better potential as compared to other well-known meta-heuristics such as PSO, DE [14], and GSA [15], in some cases classical GWO faces the issues of a slow convergence rate and stagnation at local optima. To overcome these issues, various attempts have been made and implemented on real-life application problems. For example, in [16], an improved GWO (IGWO) has been employed to train a q-Gaussian radial basis functional link-net neural network. To solve the optimal power flow problem, GWO has been used in Ref. [17]. In Ref. [18], GWO has been merged with mutation and crossover to solve the economic load dispatch problem. In Ref. [19], GWO has been hybridized with a hierarchical operator to present different modified variants of GWO. In Ref. [20], a grouped GWO has been designed for maximum power point tracking of a doubly fed induction generator. GWO has been hybridized with the genetic algorithm (GA) to minimize the potential energy of molecules [21]. For multi-criterion optimization, a multi-objective GWO is developed in Ref. [22]. For a dynamic welding scheduling problem, a hybrid multi-objective GWO has been proposed in Ref. [23]. In Ref. [24], the Levy-flight strategy is introduced to modify the search equations; this proposed algorithm was named LGWO. In LGWO, the delta wolves are ignored for the progression of the search. In Ref. [25], a multidirectional search has been introduced in GWO to solve mixed-integer optimization problems. In Ref. [26], a multi-strategy ensemble GWO has been proposed which utilizes three different search strategies to update the wolves; this improved GWO has been used for feature selection problems. In Ref. [27], an ameliorated GWO is proposed to solve economic power load dispatch problems. In Ref. [28], the β-GWO algorithm is proposed by introducing a bridging mechanism based on a chaotic sequence. In Ref. [29], an improved GWO called the augmented grey wolf optimizer is proposed and employed on grid-connected wind power plants. In Ref. [30], opposition-based learning is integrated into GWO to reduce the problem of stagnation at local optima.

Although many attempts have been made to improve the performance of classical GWO, in some cases, due to insufficient diversity of solutions, GWO still faces the problem of stagnation in local optima. Therefore, in the present paper, the leadership efficiency of the leading hunters is improved through a Levy-flight local search in the proposed algorithm called GLF–GWO. In this paper, it is questioned why the leaders should be updated using only themselves and wolves of lower fitness. Deriving inspiration from this question, a new mechanism based on Levy-flight local search is introduced for the leading hunters. The Levy-flight search strategy locally explores the promising regions to find more knowledgeable leaders in terms of fitness and feasibility. The Levy-flight strategy is most useful when the pack is trapped in local solutions. To maintain the balance between exploration and exploitation, greedy selection is applied in the proposed algorithm at the end of the search. Although Levy-flight search was already introduced in Ref. [24], it was used in a different way: in Ref. [24], the Levy-flight local search is utilized to modify the search equation of GWO and the contribution of the delta wolves is ignored; therefore, that algorithm changed the original structure of GWO. In the present paper, the Levy-flight search strategy is introduced for the leading hunters (guiding wolves) only, to explore new promising domains of the search space, and the omega wolves are updated based on the guiding directions provided by the leading hunters; therefore, the original structure of the algorithm is kept the same. To examine and validate the performance of the proposed algorithm, standard and complex benchmark sets of unconstrained and constrained problems (CEC 2014 and CEC 2006) have been considered.

The rest of this paper is structured as follows: Sect. 2 provides a brief overview of classical GWO. In Sect. 3, the proposed leadership-inspired GWO called GLF–GWO is discussed in detail. Section 4 presents the numerical experimentation and discussion on two benchmark suites, namely IEEE CEC2014 and IEEE CEC2006. In Sect. 5, the proposed algorithm is employed on some engineering applications. The conclusions of the work are presented in Sect. 6.

2 An overview of classical grey wolf optimizer

The grey wolf optimizer (GWO) algorithm was proposed by Mirjalili et al. [5] in 2014. This algorithm mimics the hunting and leadership behavior of grey wolves. The grey


wolves are a species which prefers to hunt prey in a group. Their group, which includes 5–12 wolves, is known as a pack. In the pack, the leadership hierarchy is maintained by dividing the group into four types of wolves: alpha wolves (the dominant wolves of the pack and its decision makers), beta wolves (subordinates of the alpha, acting in their absence and working as messengers for the alpha), delta wolves (caretakers of the pack which protect it from enemies), and omega wolves (the rest of the wolves, which are allowed to eat last). The alpha, beta, and delta wolves are known as the leading hunters of the pack, and the pack is totally dependent on these wolves. The pack performs the process of hunting prey in three steps [31]: (i) chasing the prey; (ii) encircling the prey; and (iii) attacking the prey. The mathematical modelling of these steps is as follows.

2.1 Social and leadership behavior

To mimic the social and leadership behavior of wolves, the three fittest solutions in terms of fitness value are selected as alpha, beta, and delta. The rest of the solutions are assumed to be omega wolves. The ω wolves iteratively improve their states with the guidance of the leading hunters.

2.2 Encircling the prey

To improve the omega wolves, the encircling behavior of the wolf pack is mimicked, in which the wolves update their location using the following equations:

x_{t+1} = x_{p,t} − μ · d,  (1)

d = |c · x_{p,t} − x_t|,  (2)

μ = 2 · b · r1 − b,  (3)

c = 2 · r2,  (4)

where x_t and x_{t+1} are the states of a wolf at the tth and (t + 1)th iteration, respectively, x_{p,t} is the state of the prey at the tth iteration, d is a difference vector, and c and μ are the random coefficients which are employed in GWO to perform the exploration and exploitation of the search space. b is a parameter which is decreased linearly from 2 to 0 in the algorithm and can be formulated as

b = 2 − 2 · (t / maximum number of iterations),  (5)

and r1, r2 are random numbers lying in the interval (0, 1). The vector b helps to transition from the exploration to the exploitation phase.

2.3 Attacking and hunting the prey

Mirjalili et al. [5] modelled the hunting behavior of grey wolves by assuming an equal contribution of the leading hunters at the time of determining the prey location. Therefore, each wolf updates its location by following these leaders as follows:

X′1 = x_{α,t} − μ_α · d_α,  (6)

X′2 = x_{β,t} − μ_β · d_β,  (7)

X′3 = x_{δ,t} − μ_δ · d_δ,  (8)

X_{t+1} = (X′1 + X′2 + X′3) / 3,  (9)

where x_α, x_β, and x_δ are the locations of the leading wolves, d_α, d_β, and d_δ are the difference vectors obtained using Eq. (2), and μ_α, μ_β, and μ_δ are the coefficient vectors obtained with the help of Eq. (3).

2.4 Exploitation and exploration in GWO

During the hunting process of wolves, it is observed that when |μ| > 1 or c > 1, exploration is performed, which mimics the searching behavior of the wolves, and when |μ| < 1 or c < 1, the exploitation of discovered search areas occurs, which represents the attacking behavior of the grey wolves. From the search equations, it can be seen that when t → maximum number of iterations, then μ → 0; therefore, in this case, the coefficient c is responsible for exploring the search space. The framework of the classical GWO algorithm is presented in Fig. 1.

3 Proposed improved leadership-inspired GWO

3.1 Motivation

Since the GWO algorithm mimics the leadership behavior and hunting strategies of the grey wolf pack to update the solutions, the search process is principally dependent on the leading hunters. From the search equations of GWO, it can be seen that all the omega wolves update their state randomly based on the guidance provided by the leading wolves. Moreover, the leading wolves update their state using themselves or less fit wolves, which is not convincing and sometimes may not be able to produce promising


Fig. 1  Pseudocode of the classical version of GWO

guidance for the omega wolves. Therefore, when the leading hunters are trapped in sub-optimal solutions, it is difficult for the pack to move away from those solutions. This situation occurs particularly when the fitness landscape of the problem contains deep valleys. To avoid such situations and to jump towards more promising regions, the leading search efficiency of GWO can be improved. Therefore, in the present work, a Levy-flight local search is introduced for the leading wolves. The Levy-flight local search explores the regions around the leaders to find comparatively better leading guidance. To establish a balance between exploitation and exploration, a greedy selection is also applied, which maintains the strength of the wolf pack and prevents the wolves from diverging from the promising domains of the search space.

3.2 Development of GLF–GWO algorithm

To integrate sufficient guidance and to enhance the exploration of new search regions, a Levy-flight motion, which is a non-Gaussian random process whose random steps are drawn from the Levy distribution, is applied to the leading hunters of the grey wolf pack. The Levy distribution can be defined by the power-law equation

L(s) ∼ |s|^(−1−β), 0 < β ≤ 2,  (10)

where s is a step length and β is the Levy index.

A simple version of the Levy distribution is defined as

L(s, α, γ) = √(α/(2π)) · exp(−α/(2(s − γ))) · 1/(s − γ)^{3/2} if 0 < γ < s < ∞, and 0 if s ≤ 0,  (11)

where α is the scale parameter, which controls the scale of the distribution, and γ is the location or shift parameter.

In terms of the Fourier transform, the Levy distribution is defined as follows:

F(k) = exp(−μ|k|^β), 0 < β ≤ 2,  (12)

where the parameter μ, known as the scale parameter, lies in the interval [−1, 1] and β ∈ (0, 2] is known as the Levy index.

In the present paper, the Levy-flight local search is used only for the leading hunters, alpha, beta, and delta, to prevent them from getting stuck in local optima. In the proposed GLF–GWO, the state of the ith leading wolf x_i = (x_{i1}, x_{i2}, …, x_{in}) for the jth dimension (j = 1, 2, …, n) is updated as follows:

x′_{ij} = x_{ij} + par × s,  (13)

where x′_{ij} is the updated ith leading wolf, s is the step length drawn from the Levy distribution, and par is a control parameter that controls the step length. Numerically, par is selected as a linearly decreasing variable defined as

par = 2 − 2 · (t / maximum number of iterations).  (14)

The parameter par thus decreases linearly from 2 to 0. The reason for selecting this form of par is to maintain the balance between exploration and exploitation. The initial values of par allow the wolves to explore the search space, so that convergence towards local optima can be avoided. The low values of par produced after half of the maximum number of iterations in the algorithm exploit the elite areas of the search space.
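As a concrete illustration, the leader update of Eqs. (13)–(14), combined with the Mantegna step generator of Eq. (15) and the greedy selection described in the following subsection, can be sketched in Python. This is a minimal sketch under stated assumptions, not the authors' implementation (the paper's experiments were run in MATLAB): the function names, the sphere objective, and the loop bounds in the usage are illustrative choices, and β = 1 follows the setting fixed later in this section.

```python
import math
import numpy as np

def mantegna_step(beta=1.0, rng=None):
    """Draw one symmetric Levy-distributed step via the Mantegna algorithm (Eq. 15)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u)   # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0)       # v ~ N(0, 1), sigma_v = 1
    return u / abs(v) ** (1 / beta)

def update_leader(x, fitness, t, t_max, rng=None):
    """One GLF-GWO leader update: Eq. (13) with the par schedule of Eq. (14),
    followed by greedy selection between the old and trial positions (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    par = 2.0 - 2.0 * t / t_max                                      # Eq. (14): 2 -> 0
    trial = x + par * np.array([mantegna_step(rng=rng) for _ in x])  # Eq. (13), per dimension
    return trial if fitness(trial) < fitness(x) else x               # greedy selection

if __name__ == "__main__":
    # Illustrative usage on a sphere objective (an assumption, not from the paper).
    rng = np.random.default_rng(0)
    sphere = lambda z: float(np.sum(z * z))
    leader = rng.uniform(-10.0, 10.0, size=5)
    start = sphere(leader)
    for t in range(1, 101):
        leader = update_leader(leader, sphere, t, 100, rng)
    print(start, "->", sphere(leader))
```

Because of the greedy selection, a leader's fitness never worsens across these updates; early iterations (large par) take long exploratory jumps, while late iterations (par near 0) refine the current position.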


Thus, the parameter par helps to maintain the balance between exploration and exploitation in the algorithm.

To generate the step length in our Levy-flight strategy, the Mantegna algorithm [32] for the symmetric Levy distribution has been used. The term symmetric means that the step size can be positive or negative. The calculated step length s in the Mantegna algorithm is given by

s = u / |v|^{1/β},  (15)

where u and v are two normal stochastic variables with standard deviations σ_u and σ_v, that is,

u ∼ N(0, σ_u²) and v ∼ N(0, σ_v²),

where σ_u = (Γ(1 + β) sin(πβ/2) / (Γ((1 + β)/2) β 2^{(β−1)/2}))^{1/β} and σ_v = 1, with

Γ(1 + β) = ∫₀^∞ x^β e^{−x} dx.  (16)

In particular, if β is an integer, then Γ(1 + β) = β!.

In the present work, the value of β is fixed as 1. To establish a balance between exploitation and exploration, a greedy selection is introduced between the previous and present states of the wolves. The greedy selection also helps to avoid the divergence of the wolves from the available promising areas of the search space. In the GLF–GWO algorithm, the reason for selecting the Levy-flight distribution to update the leaders is its random behavior. The step lengths produced by the Levy-flight distribution are sometimes very large, due to its infinite variance, which helps in avoiding stagnation at local optima. The smaller values of the step length locally explore the search space and extract the useful information available at the discovered promising areas of the search space. Thus, the Levy-flight search maintains the balance between exploration and exploitation during the search.

The steps of the proposed GLF–GWO algorithm are explained in Fig. 2.

Fig. 2  Pseudocode of the proposed GLF–GWO algorithm

3.3 Constraint handling technique

To update the leaders for unconstrained problems, only the fitness (objective function value) of the wolves is considered, but to update the leaders in constrained problems, the constraints should be handled through some mechanism. The constraint handling mechanism based on constraint violation [33], which is integrated into the GLF–GWO, is stated as follows.

1. Sort the wolf population (P) in increasing order of constraint violation value. Denote the new sorted population by P1.
2. Now, sort the feasible wolves in order of their objective function values (increasing order for a minimization problem). Denote this new population by P2. From the sorted population P2, the top three wolves can be collected as the leading hunters for the pack.

Here, the constraint violation viol_x [33] of any solution x for the general optimization problem

Min f(x), x = (x1, x2, …, xd) ∈ R^d,  (17)

s.t. g_i(x) ≤ 0, i = 1, 2, …, p,  (18)

h_j(x) = 0, j = 1, 2, …, m,  (19)

can be calculated as



viol_x = Σ_{i=1}^{p} G_i(x) + Σ_{j=1}^{m} H_j(x),  (20)

where

G_i(x) = g_i(x) if g_i(x) > 0, and 0 otherwise,  (21)

and

H_j(x) = |h_j(x)| if |h_j(x)| − ε > 0, and 0 otherwise,  (22)

where ε is a tolerance parameter which is fixed as 10^−4 in the present paper. f, h_j, and g_i are the objective function, equality constraints, and inequality constraints, respectively. m and p represent the number of equality and inequality constraints, respectively.

The proposed constraint handling technique is a simple and natural way of picking the best solutions. The main features of this technique are its simplicity and parameter-free structure. Since no extra parameters other than the algorithm parameters are required in this technique, it is easy to implement in any algorithm.

4 Details of benchmark suites and numerical experimentation

In the present section, the proposed GLF–GWO is validated on the unconstrained problems of IEEE CEC2014 and the constrained problems of IEEE CEC2006. The unconstrained benchmark set contains 30 problems of various categories such as multimodal, unimodal, composite, and hybrid, and the details can be found in Ref. [34]. The constrained problem set has 24 problems of various complexity levels with inequality and/or equality constraints, which can be found in Ref. [35]. In this paper, all the experiments are performed on MATLAB 2010a with a 4 GB RAM system.

In this paper, all the results are presented as per the guidelines of CEC. The maximum number of function evaluations as a termination criterion is adopted as decided by CEC in Refs. [34, 35]. The size of the population of wolves is taken as 3× the dimension of the problem for each benchmark set.

4.1 Numerical results and discussion on CEC 2014 benchmark set

In this section, the proposed GLF–GWO algorithm is validated on the CEC 2014 benchmark set of unconstrained problems. The 10- and 30-dimensional problems are considered in our study. The numerical results on the CEC2014 problems obtained from the GLF–GWO algorithm and classical GWO are presented in Tables 1 and 2 corresponding to the 10- and 30-dimensional test problems. Tables 1 and 2 provide various statistical measures such as the minimum (min), standard deviation (SD), median (med), average (avg), and maximum (max) of the absolute error values in the objective function. The absolute error is defined as |F(x) − F(x*)|, where x is the obtained feasible solution and x* is the optimal solution for the problem. F(x) represents the objective function value at vector x.

For both dimensions 10 and 30, the superior performance of the GLF–GWO can be observed in all the statistical measures on the unimodal test problems (F1–F3). The better search efficiency on unimodal problems shows that the Levy-flight local search with low values of the step length, together with greedy selection, has enhanced the exploitation ability and convergence rate of the grey wolves in the GLF–GWO algorithm.

The results on the multimodal test functions, which are used to judge the exploration ability of any metaheuristic algorithm, indicate that the GLF–GWO has enhanced the exploration ability of the search agents. In the 10-dimensional multimodal problems F7, F8, F10, F12, and F15, the proposed GLF–GWO outperforms classical GWO in all the statistical measures such as mean, median, minimum, maximum, and standard deviation of the error values. In problem F4, the GLF–GWO is better than classical GWO only for the minimum and standard deviation values of the error. In F5, F6, and F16, except for the standard deviation value, the GLF–GWO is better than the classical GWO. In F9, F13, and F14, except for the minimum value of the error, the GLF–GWO is better than the classical GWO. In F11, the GLF–GWO is better than the classical GWO only for the minimum and maximum values of the error. In the 30-dimensional multimodal problems F4, F5, F7–F9, F13, and F15, the proposed GLF–GWO outperforms classical GWO in all the statistics. In F6, the GLF–GWO is better than classical GWO only for the minimum value of the error. In F10, except for the median and mean values of the error, the GLF–GWO is better than classical GWO. In F11, the GLF–GWO is better than classical GWO only for the median value of the error. In F12 and F14, the GLF–GWO is better than the classical GWO for all the statistics except the minimum value of the error. In F16, the GLF–GWO is better than the classical GWO only for the standard deviation value of the error. Overall, from the numerical results, it can be concluded that the Levy-flight search strategy, which enhances the search ability of the leading wolves, successfully improves the exploration ability of the wolves in the GLF–GWO.

In any optimization algorithm, an appropriate synergy between exploitation and exploration should be present for the proper functioning of the algorithm. This characteristic of an algorithm can be verified through hybrid and composite problems. In the IEEE CEC 2014 benchmark set, the problems from F17 to F22 are hybrid problems, and from F23 to F30 the problems are composite. In these problems, the objective function is designed by combining the features of unimodal

Table 1  Simulated results for the CEC 2014 benchmark suite corresponding to the dimension 10

Function Algorithm Med Avg Min Max SD
F1 GWO 6.90E+06 7.10E+06 2.13E+05 1.97E+07 4.81E+06
GLF–GWO 8.68E+04 1.13E+05 5.88E+03 4.89E+05 9.50E+04
F2 GWO 1.30E+03 1.16E+07 2.65E+02 2.69E+08 5.28E+07
GLF–GWO 5.79E+02 1.71E+03 3.92E+01 1.16E+04 2.64E+03
F3 GWO 4.65E+03 5.22E+03 5.99E+02 1.37E+04 3.67E+03
GLF–GWO 2.80E+02 4.23E+02 7.24E−01 1.63E+03 4.24E+02
F4 GWO 3.52E+01 3.34E+01 5.77E−01 6.60E+01 9.69E+00
GLF–GWO 3.48E+01 2.80E+01 5.89E−01 3.49E+01 1.31E+01
F5 GWO 2.04E+01 2.04E+01 2.02E+01 2.05E+01 7.36E−02
GLF–GWO 2.00E+01 1.96E+01 1.03E−02 2.01E+01 2.80E+00
F6 GWO 1.70E+00 1.98E+00 1.59E−01 5.17E+00 1.05E+00
GLF–GWO 1.45E+00 1.61E+00 9.25E−02 4.90E+00 1.30E+00
F7 GWO 1.02E+00 1.23E+00 9.27E−02 3.80E+00 8.69E−01
GLF–GWO 1.26E−01 1.28E−01 1.93E−02 2.60E−01 5.53E−02
F8 GWO 7.97E+00 9.09E+00 9.99E−01 2.40E+01 4.89E+00
GLF–GWO 4.97E+00 5.37E+00 9.95E−01 1.29E+01 2.46E+00
F9 GWO 1.24E+01 1.33E+01 3.03E+00 2.91E+01 6.18E+00
GLF–GWO 1.19E+01 1.23E+01 3.98E+00 2.09E+01 4.05E+00
F10 GWO 2.68E+02 3.12E+02 2.12E+01 8.02E+02 1.75E+02
GLF–GWO 1.94E+02 2.19E+02 6.96E+00 4.34E+02 1.24E+02
F11 GWO 3.90E+02 4.43E+02 1.22E+02 1.31E+03 2.29E+02
GLF–GWO 5.77E+02 5.71E+02 3.55E+00 1.29E+03 2.80E+02
F12 GWO 5.28E−01 6.59E−01 1.68E−02 1.76E+00 5.45E−01
GLF–GWO 1.02E−01 1.09E−01 1.49E−02 4.44E−01 7.49E−02
F13 GWO 1.76E−01 1.68E−01 6.03E−02 2.69E−01 5.27E−02
GLF–GWO 1.21E−01 1.37E−01 7.22E−02 2.64E−01 4.84E−02
F14 GWO 1.68E−01 2.42E−01 3.09E−02 6.68E−01 1.90E−01
GLF–GWO 1.38E−01 1.64E−01 5.14E−02 6.16E−01 9.78E−02
F15 GWO 1.81E+00 1.82E+00 3.81E−01 3.41E+00 7.98E−01
GLF–GWO 8.75E−01 9.35E−01 3.48E−01 1.98E+00 3.84E−01
F16 GWO 2.55E+00 2.52E+00 1.32E+00 3.53E+00 4.69E−01
GLF–GWO 2.30E+00 2.30E+00 8.11E−01 3.39E+00 5.49E−01
F17 GWO 3.05E+03 2.19E+04 9.57E+02 5.61E+05 9.06E+04
GLF–GWO 2.25E+03 3.23E+03 2.02E+02 1.18E+04 2.97E+03
F18 GWO 7.20E+03 7.93E+03 1.32E+02 1.58E+04 5.40E+03
GLF–GWO 7.83E+03 9.45E+03 3.37E+01 2.92E+04 7.77E+03
F19 GWO 2.27E+00 2.61E+00 1.09E+00 5.79E+00 1.13E+00
GLF–GWO 1.92E+00 2.11E+00 8.67E−01 3.72E+00 7.41E−01
F20 GWO 3.57E+03 3.49E+03 3.07E+01 1.77E+04 3.90E+03
GLF–GWO 1.63E+01 1.66E+01 3.18E+00 4.80E+01 8.85E+00
F21 GWO 5.15E+03 6.26E+03 4.30E+02 1.37E+04 4.44E+03
GLF–GWO 4.02E+03 4.24E+03 4.93E+01 1.19E+04 3.73E+03
F22 GWO 4.28E+01 8.19E+01 2.17E+01 1.71E+02 6.08E+01
GLF–GWO 3.70E+01 6.26E+01 3.29E+00 1.64E+02 5.66E+01
F23 GWO 3.31E+02 3.33E+02 3.29E+02 3.42E+02 3.41E+00
GLF–GWO 3.29E+02 3.29E+02 3.29E+02 3.29E+02 2.97E−05
F24 GWO 1.26E+02 1.40E+02 1.12E+02 2.04E+02 3.16E+01
GLF–GWO 1.18E+02 1.26E+02 1.08E+02 2.03E+02 2.43E+01
F25 GWO 2.00E+02 1.99E+02 1.74E+02 2.03E+02 5.06E+00
GLF–GWO 2.00E+02 1.89E+02 1.28E+02 2.04E+02 1.94E+01
F26 GWO 1.00E+02 1.00E+02 1.00E+02 1.00E+02 3.77E−02


Table 1  (continued)

Function Algorithm Med Avg Min Max SD

GLF–GWO 1.00E+02 1.00E+02 1.00E+02 1.00E+02 4.08E−02


F27 GWO 3.44E+02 2.95E+02 1.89E+00 4.43E+02 1.49E+02
GLF–GWO 3.46E+02 3.09E+02 1.95E+00 4.66E+02 1.27E+02
F28 GWO 4.69E+02 4.46E+02 3.57E+02 6.19E+02 6.47E+01
GLF–GWO 4.29E+02 4.38E+02 3.57E+02 6.62E+02 6.25E+01
F29 GWO 6.41E+02 3.60E+05 3.15E+02 2.14E+06 7.85E+05
GLF–GWO 3.83E+02 1.69E+05 2.49E+02 1.72E+06 5.18E+05
F30 GWO 9.30E+02 1.15E+03 4.96E+02 2.91E+03 6.23E+02
GLF–GWO 7.08E+02 8.65E+02 3.46E+02 1.61E+03 3.54E+02

The better results are highlighted in bold

and multimodal problems. In all the 10-dimensional hybrid problems except F18, the proposed GLF–GWO provides better results as compared to classical GWO in terms of the mean, median, minimum, maximum, and standard deviation of the error values. In F18, the GLF–GWO is able to provide only a better minimum value of the error than classical GWO. In all the 30-dimensional hybrid problems, the proposed GLF–GWO algorithm outperforms the classical GWO in all the statistics such as mean, median, minimum, maximum, and standard deviation of the error values.

In the 10-dimensional composite problems F23, F24, F29, and F30, the GLF–GWO provides better results in terms of the mean, median, minimum, maximum, and standard deviation of the error values. In F25, except for the maximum and standard deviation, in F26, except for the standard deviation, and in F28, except for the maximum error value, the GLF–GWO outperforms classical GWO in the remaining statistical measures. In F27, the GLF–GWO provides only a better value of the standard deviation as compared to classical GWO. In the 30-dimensional composite problems F23, F26, F29, and F30, the GLF–GWO provides better results in all the statistics as compared to the classical GWO. In F24, the GLF–GWO and classical GWO provide the same values of the median and minimum error, while in terms of the other statistics, the classical GWO is better than the GLF–GWO. In F25, except for the mean, minimum, and maximum values of the error, and in F27, except for the mean, maximum, and standard deviation values of the error, the proposed GLF–GWO is better than classical GWO. In F28, the classical GWO provides better results in terms of all the statistical measures as compared to the GLF–GWO. Hence, the performance comparison between classical GWO and the proposed GLF–GWO on hybrid and composite problems demonstrates the efficacy of the proposed strategies (Levy-flight local search and greedy selection) in maintaining an appropriate balance of exploration and exploitation during the search.

Overall, from the experimental results on the various categories of benchmark problems, it can be concluded that the proposed search mechanism for the leading wolves of the grey wolf pack, based on the Levy-flight distribution, enhances the search efficiency of the wolves in the GLF–GWO algorithm. The results also demonstrate that the GLF–GWO algorithm is a better optimizer as compared to the classical version of GWO.

The diversity in the GLF–GWO can be analyzed by comparing the diversity curves of classical GWO and GLF–GWO. These diversity curves are drawn by considering the average distance between the solutions in each iteration. To calculate the average distance, the Euclidean distance ||·|| between two solutions X = (x1, x2, …, xD) and Y = (y1, y2, …, yD) is used, which is calculated as follows:

||X − Y||₂ = √( Σ_{j=1}^{D} (x_j − y_j)² ),  (23)

where D represents the dimension of the problem. From the diversity curves drawn in Figs. 3 and 4, it can be observed that in the initial iterations, the average distance between the search agents is high and decreases with an increase in the number of iterations for both classical GWO and the proposed GLF–GWO. However, on most of the test functions, the average distance in each iteration is higher for the proposed GLF–GWO algorithm as compared to classical GWO, which shows the better ability of the GLF–GWO in exploring new regions of the search space. This demonstrates the effect of improving the search mechanism of the leading wolves through the Levy-flight search strategy in the GLF–GWO.

4.2 Statistical validity of the results

In this section, to confirm that the better results obtained through the proposed GLF–GWO are not just by chance, the non-parametric Wilcoxon rank sum test is used. The statistical test is performed at the 0.05 level of significance. The statistical conclusions drawn by applying the Wilcoxon test between classical GWO and the proposed GLF–GWO are presented in Tables 3 and 4 corresponding
Table 2  Simulated results for the CEC 2014 benchmark suite corresponding to dimension 30

Function Algorithm Med Avg Min Max SD
F1 GWO 5.51E+07 5.84E+07 5.17E+06 1.40E+08 3.40E+07
GLF–GWO 4.70E+06 5.05E+06 1.12E+06 1.32E+07 2.37E+06
F2 GWO 1.59E+09 2.55E+09 1.54E+08 1.07E+10 2.42E+09
GLF–GWO 9.31E+03 9.94E+03 1.11E+03 3.37E+04 7.52E+03
F3 GWO 3.05E+04 3.11E+04 1.68E+04 5.00E+04 7.91E+03
GLF–GWO 1.16E+03 2.12E+03 2.07E+00 7.13E+03 2.33E+03
F4 GWO 2.38E+02 2.42E+02 1.02E+02 4.67E+02 7.00E+01
GLF–GWO 1.02E+02 1.03E+02 5.35E+01 1.83E+02 2.82E+01
F5 GWO 2.09E+01 2.09E+01 2.08E+01 2.10E+01 4.78E−02
GLF–GWO 2.00E+01 2.00E+01 2.00E+01 2.01E+01 2.29E−02
F6 GWO 1.29E+01 1.33E+01 7.08E+00 2.03E+01 2.87E+00
GLF–GWO 1.45E+01 1.53E+01 4.98E+00 2.39E+01 4.18E+00
F7 GWO 1.30E+01 1.90E+01 2.95E+00 5.46E+01 1.52E+01
GLF–GWO 4.25E−02 4.51E−02 6.42E−03 1.02E−01 1.88E−02
F8 GWO 7.88E+01 8.14E+01 4.64E+01 1.36E+02 2.02E+01
GLF–GWO 4.18E+01 4.36E+01 2.79E+01 8.26E+01 1.13E+01
F9 GWO 9.93E+01 1.04E+02 5.57E+01 2.08E+02 3.26E+01
GLF–GWO 9.40E+01 9.44E+01 4.58E+01 1.56E+02 2.18E+01
F10 GWO 2.17E+03 2.26E+03 1.12E+03 4.12E+03 4.97E+02
GLF–GWO 2.49E+03 2.47E+03 1.01E+03 3.36E+03 4.78E+02
F11 GWO 2.65E+03 2.71E+03 1.53E+03 3.68E+03 5.46E+02
GLF–GWO 3.64E+03 3.65E+03 2.08E+03 4.88E+03 6.19E+02
F12 GWO 2.28E+00 1.80E+00 5.94E−02 3.08E+00 1.06E+00
GLF–GWO 5.19E−01 5.40E−01 2.16E−01 1.17E+00 1.76E−01
F13 GWO 4.11E−01 4.46E−01 2.58E−01 2.22E+00 2.74E−01
GLF–GWO 2.94E−01 3.05E−01 1.51E−01 4.86E−01 6.29E−02
F14 GWO 6.70E−01 3.57E+00 1.46E−01 1.88E+01 4.92E+00
GLF–GWO 3.29E−01 3.62E−01 1.63E−01 6.97E−01 1.38E−01
F15 GWO 2.89E+01 1.99E+02 6.27E+00 3.13E+03 5.76E+02
GLF–GWO 1.23E+01 1.26E+01 5.60E+00 2.04E+01 3.56E+00
F16 GWO 1.08E+01 1.08E+01 9.14E+00 1.22E+01 7.26E−01
GLF–GWO 1.14E+01 1.14E+01 9.79E+00 1.33E+01 6.58E−01
F17 GWO 8.27E+05 1.49E+06 6.92E+04 1.01E+07 2.00E+06
GLF–GWO 3.63E+05 4.83E+05 6.45E+04 1.38E+06 3.10E+05
F18 GWO 1.15E+04 8.49E+06 3.62E+02 6.83E+07 2.07E+07
GLF–GWO 1.79E+03 3.18E+03 3.05E+02 1.82E+04 3.66E+03
F19 GWO 2.64E+01 3.84E+01 9.11E+00 1.00E+02 2.49E+01
GLF–GWO 1.39E+01 1.44E+01 8.66E+00 6.75E+01 7.92E+00
F20 GWO 1.26E+04 1.56E+04 3.94E+03 6.78E+04 1.15E+04
GLF–GWO 2.77E+02 5.27E+02 1.29E+02 8.33E+03 1.24E+03
F21 GWO 2.41E+05 4.36E+05 3.28E+04 1.40E+06 3.70E+05
GLF–GWO 1.85E+05 2.33E+05 2.25E+04 1.29E+06 2.32E+05
F22 GWO 3.42E+02 3.54E+02 1.61E+02 9.27E+02 1.52E+02
GLF–GWO 3.07E+02 3.29E+02 4.90E+01 6.33E+02 1.40E+02
F23 GWO 3.31E+02 3.34E+02 3.18E+02 3.67E+02 1.05E+01
GLF–GWO 3.15E+02 3.16E+02 3.15E+02 3.17E+02 3.66E−01
F24 GWO 2.00E+02 2.00E+02 2.00E+02 2.00E+02 7.85E−04
GLF–GWO 2.00E+02 2.01E+02 2.00E+02 2.25E+02 3.57E+00
F25 GWO 2.11E+02 2.10E+02 2.00E+02 2.19E+02 5.30E+00
GLF–GWO 2.11E+02 2.12E+02 2.04E+02 2.26E+02 4.48E+00
F26 GWO 1.01E+02 1.45E+02 1.00E+02 2.00E+02 5.00E+01

GLF–GWO 1.00E+02 1.28E+02 1.00E+02 2.00E+02 4.49E+01
F27 GWO 6.70E+02 6.54E+02 4.21E+02 9.17E+02 1.28E+02
GLF–GWO 6.65E+02 6.71E+02 4.04E+02 1.06E+03 1.55E+02
F28 GWO 1.05E+03 1.11E+03 8.25E+02 1.57E+03 2.15E+02
GLF–GWO 1.14E+03 1.22E+03 9.52E+02 2.21E+03 2.74E+02
F29 GWO 8.86E+04 1.40E+06 5.25E+03 1.16E+07 2.71E+06
GLF–GWO 3.86E+03 1.82E+05 1.59E+03 9.01E+06 1.26E+06
F30 GWO 4.07E+04 4.82E+04 1.26E+04 2.41E+05 3.39E+04
GLF–GWO 6.42E+03 7.20E+03 2.26E+03 1.43E+04 2.61E+03

The better results are highlighted in bold

to the 10- and 30-dimensional test problems. In the table, the symbols ‘+/=/−’ are used to indicate that the GLF–GWO algorithm is significantly better than, the same as, or worse than the classical version of GWO. From the statistical conclusions, it can be seen that the proposed GLF–GWO significantly outperforms the classical GWO.

4.3 Comparison of GLF–GWO with other optimization methods

This section compares the performance of the proposed GLF–GWO with classical GWO [5], PSO [1], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. To compare the results, 30-dimensional problems have been taken. For a fair comparison, 51 independent runs are executed for each test function, and the termination criterion is taken as decided by CEC, which is 10^4 × D function evaluations. The results obtained from the various comparative optimization methods are shown in Table 5. In the table, the comparison is done by reporting the mean objective function value. The table indicates the efficacy of the proposed GLF–GWO algorithm as compared to the other optimization methods.

In all the unimodal problems F1–F3, the proposed GLF–GWO outperforms classical GWO, the other variants of GWO (modGWO, OBGWO, IGWO, and EEGWO), and the other optimization methods (PSO, MFO, and SCA) in terms of the mean value of the absolute error in objective function values. Thus, the analysis of the results obtained by the GLF–GWO and the performance comparison with the other algorithms show that the GLF–GWO algorithm is better in exploitation and convergence rate as compared to the other comparative algorithms. The results also show that the greedy selection mechanism and the Levy-flight local search strategy, when the produced step length is small, contribute their impact to enhance the exploitation strength of the GLF–GWO algorithm.

In the multimodal problems F4, F5, F7, and F12–F15, the proposed GLF–GWO provides better results as compared to all other comparative algorithms. In F6, the GLF–GWO provides better results than the other algorithms except for PSO and modGWO. In F8–F10 and F16, except for PSO, the proposed GLF–GWO provides better results than the other algorithms. In F11, except for PSO, modGWO, and IGWO, the GLF–GWO performs better than the other algorithms in terms of providing a better mean value of error. Thus, on analyzing the comparative performance of the GLF–GWO and the other algorithms on multimodal problems, it can be concluded that the proposed Levy-flight search strategy for updating the leaders of the grey wolf pack has enhanced the explorative ability of all wolves.

In all the hybrid problems (F17–F22), the proposed GLF–GWO provides a lower value of mean error as compared to all other comparative algorithms. In the composite problems F24, F29, and F30, the proposed GLF–GWO outperforms all other comparative algorithms. In F23, the GLF–GWO provides a better result than all other comparative algorithms except EEGWO. In F25, the GLF–GWO performs better than MFO and SCA only. In F26, IGWO, MFO, and SCA perform better than GLF–GWO. In F27, PSO, modGWO, and EEGWO perform better than GLF–GWO. In F28, the proposed GLF–GWO provides a better mean value of error as compared to all other comparative algorithms except modGWO and MFO. Overall, the performance comparison on hybrid and composite problems demonstrates the better search ability of the GLF–GWO as compared to the other algorithms in most of the problems. From the results, it can also be seen that the proposed Levy-flight search strategy maintains the balance of exploration and exploitation in the algorithm.

Thus, the overall analysis of results demonstrates that the proposed GLF–GWO algorithm explores the search space more efficiently as compared to the classical GWO by providing a suitable search mechanism to the leading hunters of the pack. The greedy selection also shows its ability to

Fig. 3  Diversity analysis in the GLF–GWO algorithm (diversity vs. iterations for GWO and GLF–GWO on functions F1, F3, F5, F8, F9, F13, F15, and F16)
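The diversity behavior in Figs. 3 and 4 stems from the Levy-flight update of the leading wolves. The paper's exact update equation is not reproduced in this excerpt; the sketch below only illustrates how heavy-tailed Levy steps are commonly generated (Mantegna's algorithm), with `levy_step` and the scale factor in the comment as illustrative names, not the paper's notation:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step vector via Mantegna's algorithm.

    The heavy-tailed step u / |v|^(1/beta) yields mostly small moves with
    occasional long jumps, so a leader can refine its current region while
    still being able to escape a stagnation point.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator: N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, dim)       # denominator: N(0, 1)
    return u / np.abs(v) ** (1 / beta)

# A leader update of the hypothetical form  x_new = x + alpha * levy_step(dim).
rng = np.random.default_rng(0)
steps = levy_step(10000, rng=rng)
```

The β parameter controls how heavy the tail is; β = 1.5 is a common default in Levy-flight metaheuristics.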


Fig. 4  Diversity analysis in the GLF–GWO algorithm (diversity vs. iterations for GWO and GLF–GWO on functions F17, F19, F22, F23, F25, F26, F27, and F30)
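The "diversity" on the vertical axes of Figs. 3 and 4 is an average-distance measure between search agents. Assuming it is computed as the mean distance of each wolf from the population centroid (one common convention; the exact formula is not shown in this excerpt), a minimal sketch:

```python
import numpy as np

def diversity(population):
    """Mean Euclidean distance of each agent from the population centroid.

    population: (n, d) array -- n wolves in a d-dimensional search space.
    A large value means the pack is still spread out (exploring); a value
    near zero means the pack has collapsed onto a single region.
    """
    centroid = population.mean(axis=0)
    return float(np.linalg.norm(population - centroid, axis=1).mean())

# A converging pack: diversity shrinks as the agents cluster together.
spread = diversity(np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]]))
tight = diversity(np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1], [1.1, 1.1]]))
```

Tracking this quantity per iteration reproduces curves of the kind shown above: both algorithms start with high diversity that decays, with GLF–GWO decaying more slowly.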


Table 3  Statistical validity of results on 10-dimensional CEC 2014 benchmark problems

Function  Conclusion  p value     Function  Conclusion  p value
F1        +           5.15E−10    F16       +           2.51E−02
F2        +           9.17E−03    F17       +           1.53E−03
F3        +           6.53E−10    F18       =           0
F4        +           1.76E−05    F19       +           1.05E−02
F5        +           5.15E−10    F20       +           5.15E−10
F6        =           0           F21       +           2.33E−02
F7        +           7.35E−10    F22       +           2.22E−02
F8        +           4.74E−05    F23       +           5.15E−10
F9        =           0           F24       +           3.31E−04
F10       +           7.65E−04    F25       −           1.30E−02
F11       −           2.45E−02    F26       =           0
F12       +           2.44E−08    F27       =           0
F13       +           3.15E−03    F28       =           0
F14       +           4.10E−02    F29       +           9.93E−07
F15       +           2.66E−07    F30       +           7.55E−03

Table 4  Statistical validity of results on 30-dimensional CEC 2014 benchmark problems

Function  Conclusion  p value     Function  Conclusion  p value
F1        +           5.15E−10    F16       −           3.43E−04
F2        +           5.15E−10    F17       +           1.36E−04
F3        +           5.15E−10    F18       +           1.92E−06
F4        +           5.80E−10    F19       +           1.58E−08
F5        +           5.15E−10    F20       +           5.46E−10
F6        −           1.27E−02    F21       +           3.89E−03
F7        +           5.15E−10    F22       +           0
F8        +           6.15E−10    F23       +           5.15E−10
F9        =           0           F24       −           5.15E−10
F10       −           1.41E−02    F25       =           0
F11       +           1.67E−08    F26       =           0
F12       +           1.78E−07    F27       =           0
F13       +           1.32E−06    F28       −           3.41E−02
F14       +           1.69E−03    F29       +           1.86E−08
F15       +           9.14E−09    F30       +           5.15E−10

maintain a balance between exploration and exploitation and to avoid the divergence of wolves from available promising areas of the search space.

4.4 Convergence analysis

To analyze the convergence rate of the proposed GLF–GWO algorithm, and to compare its convergence behavior with other optimization algorithms, the convergence curves for the CEC2014 benchmark problems are plotted in this section. The convergence curves are plotted in Figs. 5, 6, and 7 by considering the average objective function values achieved in 51 independent runs. In the curves, the iterations of an algorithm are shown on the horizontal axis and the objective function value is depicted on the vertical axis. From the convergence curves, it can be seen that the proposed GLF–GWO provides a better convergence rate as compared to other improved variants of GWO and some recent optimization methods while solving the benchmark test problems.

4.5 Numerical results and discussion on CEC 2006 benchmark set

Constrained optimization problems are more difficult to solve than unconstrained problems, because various factors such as the type of constraints, the type of objective function, the ratio of the feasible region to the complete search space, and the number of constraints affect the difficulty of the problem. In this section, the IEEE CEC 2006 benchmark set of constrained problems is taken to investigate the effect of improving the leading guidance in the GLF–GWO algorithm. The results obtained by implementing the proposed GLF–GWO algorithm and classical GWO on the IEEE CEC 2006 problems are presented in Table 6. In the GLF–GWO, to handle the constraints of the CEC 2006 problems, the constraint handling technique presented in Sect. 3.3 is applied. In Table 6, the worst, best, median, standard deviation, and average of the objective function values are also listed.

In all the benchmark test problems (except for the problem g10) given in IEEE CEC 2006, the GLF–GWO performs better than the classical GWO in terms of the best objective function value. In the test problems g01–g03, g06–g09, g13, g15, g16, g18, and g23, the proposed GLF–GWO provides a better median value of the objective function as compared to classical GWO. In terms of mean objective function value, the GLF–GWO is better than the classical GWO in problems g01, g03, g06, g07, g09, g14, g15, and g18. In terms of maximum or worst value of the objective function, the proposed GLF–GWO algorithm is better in g01–g03, g06, g07, g09, g14, g15, g18, and g24 as compared to classical GWO. In terms of standard deviation value, the proposed GLF–GWO algorithm is better than classical GWO in the problems g01, g03, g06, g07, g09, g10, g14, g15, g18, and g24. In the problems g08 and g12, the proposed GLF–GWO and classical GWO algorithms provide the same values of the objective function in terms of mean, median, minimum, maximum, and standard deviation. In the problems g05, g17, and g20–g22, both algorithms, classical GWO and the proposed GLF–GWO, fail to provide a feasible solution. Since in these problems the classical GWO and the GLF–GWO algorithms fail to enter a feasible region, these are not


Table 5  Comparison of objective function values for CEC2014 benchmark suite


Function PSO modGWO OBGWO IGWO EEGWO MFO SCA GWO GLF–GWO

F1 1.92E+07 5.43E+07 3.19E+08 2.71E+08 1.77E+09 1.19E+08 5.44E+08 5.84E+07 5.05E+06


F2 2.53E+09 2.36E+09 1.42E+10 1.93E+10 8.18E+10 1.29E+10 3.23E+10 2.55E+09 9.94E+03
F3 6.13E+03 2.66E+04 8.81E+04 4.04E+04 1.08E+05 1.03E+05 5.14E+04 3.11E+04 2.12E+03
F4 1.77E+02 2.55E+02 8.09E+02 1.25E+03 1.65E+04 1.16E+03 3.45E+03 2.42E+02 1.03E+02
F5 2.08E+01 2.09E+01 2.12E+01 2.09E+01 2.12E+01 2.03E+01 2.09E+01 2.09E+01 2.00E+01
F6 1.14E+01 1.39E+01 3.30E+01 1.99E+01 4.51E+01 2.37E+01 3.72E+01 1.33E+01 1.53E+01
F7 3.65E+01 2.49E+01 9.51E+01 1.87E+02 8.27E+02 1.14E+02 3.19E+02 1.90E+01 4.51E−02
F8 3.59E+01 7.30E+01 4.32E+02 1.31E+02 3.93E+02 1.51E+02 2.82E+02 8.14E+01 4.36E+01
F9 6.89E+01 9.95E+01 3.12E+02 1.48E+02 3.99E+02 2.24E+02 3.01E+02 1.04E+02 9.44E+01
F10 1.32E+03 2.55E+03 7.29E+03 3.01E+03 8.48E+03 3.45E+03 6.77E+03 2.26E+03 2.47E+03
F11 2.86E+03 3.03E+03 7.08E+03 3.42E+03 9.07E+03 4.21E+03 7.11E+03 2.71E+03 3.65E+03
F12 6.37E−01 2.10E+00 2.75E+00 2.35E+00 5.06E+00 5.10E−01 2.42E+00 1.80E+00 5.40E−01
F13 7.14E−01 4.40E−01 2.90E+00 3.22E+00 8.76E+00 2.11E+00 5.03E+00 4.46E−01 3.05E−01
F14 9.12E+00 3.26E+00 3.38E+01 5.11E+01 3.06E+02 3.21E+01 1.09E+02 3.57E+00 3.62E−01
F15 2.10E+01 1.48E+02 1.67E+03 9.51E+03 4.61E+05 1.68E+05 2.17E+04 1.99E+02 1.26E+01
F16 1.10E+01 1.15E+01 1.40E+01 1.21E+01 1.40E+01 1.29E+01 1.30E+01 1.08E+01 1.14E+01
F17 5.16E+05 1.40E+06 1.72E+07 7.30E+06 2.76E+08 4.26E+06 1.92E+07 1.49E+06 4.83E+05
F18 9.90E+04 9.09E+06 2.39E+08 3.72E+07 7.62E+09 5.44E+07 9.51E+08 8.49E+06 3.18E+03
F19 1.45E+01 3.70E+01 1.39E+02 1.39E+02 5.49E+02 7.60E+01 2.26E+02 3.84E+01 1.44E+01
F20 5.14E+03 1.26E+04 2.61E+05 4.09E+04 4.74E+06 6.09E+04 4.64E+04 1.56E+04 5.27E+02
F21 1.19E+05 7.93E+05 1.21E+07 2.90E+06 1.44E+08 9.77E+05 4.41E+06 4.36E+05 2.33E+05
F22 4.07E+02 3.73E+02 1.20E+03 5.61E+02 5.72E+04 7.59E+02 1.30E+03 3.54E+02 3.29E+02
F23 3.29E+02 3.35E+02 4.09E+02 3.73E+02 2.00E+02 3.70E+02 5.06E+02 3.34E+02 3.16E+02
F24 2.30E+02 2.02E+02 2.02E+02 2.02E+02 2.01E+02 2.70E+02 2.12E+02 2.00E+02 2.01E+02
F25 2.06E+02 2.09E+02 2.00E+02 2.05E+02 2.00E+02 2.14E+02 2.36E+02 2.10E+02 2.12E+02
F26 1.49E+02 1.48E+02 1.48E+02 1.01E+02 1.96E+02 1.02E+02 1.05E+02 1.45E+02 1.28E+02
F27 7.25E+02 6.60E+02 1.04E+03 8.04E+02 2.00E+02 9.27E+02 7.50E+02 6.54E+02 6.71E+02
F28 1.85E+03 1.13E+03 2.50E+03 1.38E+03 2.00E+03 1.08E+03 3.03E+03 1.11E+03 1.22E+03
F29 9.78E+06 9.38E+05 1.38E+08 6.96E+06 2.00E+05 3.28E+06 5.28E+07 1.40E+06 1.82E+05
F30 6.30E+04 4.07E+04 3.79E+06 1.62E+05 4.14E+05 5.61E+04 8.30E+05 4.82E+04 7.20E+03
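The Wilcoxon rank-sum comparisons of the kind reported in Tables 3 and 4 (and later in Table 7) can be reproduced with standard statistical tooling; a minimal sketch with SciPy, using synthetic error samples in place of the actual 51-run results:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Synthetic stand-ins for the final-error samples of two algorithms over 51 runs.
errors_glf_gwo = rng.normal(loc=1.0, scale=0.2, size=51)
errors_gwo = rng.normal(loc=2.0, scale=0.2, size=51)

stat, p_value = ranksums(errors_glf_gwo, errors_gwo)
alpha = 0.05
if p_value < alpha:
    # The sign of the statistic tells which sample tends to rank lower (better).
    conclusion = '+' if stat < 0 else '-'
else:
    conclusion = '='
```

With clearly separated samples as above, the test rejects the null hypothesis and the first algorithm is marked '+' (significantly better), matching the '+/=/−' convention used in the tables.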

mentioned in Table 6. Overall, from the numerical results, it can be observed that the Levy-flight search strategy also shows its impact in solving constrained optimization problems.

To draw concrete conclusions about the significance of the differences in the performance of the proposed GLF–GWO and GWO algorithms, statistical analysis is necessary. In the present paper, a non-parametric Wilcoxon rank-sum test is used to accomplish this analysis. The test has been conducted at a 5% significance level and the obtained conclusions are shown in Table 7. The statistical results also verify the impact of the Levy-flight search mechanism in updating the leading wolves during the search.

In the present paper, benchmark test problems are used to analyze the performance of the proposed GLF–GWO algorithm. In these test problems, our consideration of study is only the objective function value, because metaheuristic algorithms treat the optimization problem as a black box. To analyze the performance of search algorithms on real-world application problems, where the decision parameters are very crucial, posterior distribution analysis of optimization parameters [42] can be used.

4.6 Comparison of GLF–GWO with other optimization methods

In this section, the search efficiency of the constrained version of the proposed GLF–GWO has been compared with classical GWO [5], PSO [1], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. To compare the results, 25 runs of each algorithm are conducted for a fair comparison. The termination criterion is taken the same as decided by CEC. The obtained results from the various other comparative optimization


Fig. 5  Convergence curves for CEC2014 benchmark problems (objective function value vs. iterations for F1, F2, F3, and F5)

methods are reported in Table 8. In the table, the results are compared based on the mean objective function value. Only those functions are reported for which the algorithms are able to enter a feasible region of the problem. The table verifies the competitive search ability of the GLF–GWO algorithm as compared to the other comparative optimization methods. The results obtained for the constrained problems also demonstrate the benefit of enhancing the leading wolves through the Levy-flight search strategy.

4.7 Computational complexity of the proposed GLF–GWO algorithm

The computational complexity of the GLF–GWO depends on the pack size, the problem dimension, and the maximum number of iterations. Therefore, the complexity of the GLF–GWO algorithm can be easily calculated by analyzing the steps of the algorithm, which will be O(T(n ⋅ d)) in terms of big-O notation. Here, T represents the maximum number of iterations, n represents the pack size of the grey wolves, and d is the size of the dimension.

5 The GLF–GWO algorithm for real-engineering problems

5.1 Design of gear train

This unconstrained optimization problem was introduced by [43]. This problem is a discrete case study with four decision variables. In this problem, the objective is to determine the optimum number of teeth for the gears of a train to optimize the gear ratio [43]. The discrete components of the decision variable parameters are handled by rounding them to the nearest integer. In the mathematical form, the problem is stated as follows:

\[ \mathrm{Min}\; f_1(x) = \left( \frac{1}{6.931} - \frac{\eta_B \eta_C}{\eta_A \eta_D} \right)^2, \tag{24} \]
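As a quick check, Eq. (24) can be evaluated at the integer tooth counts reported for this problem in Table 9; a minimal sketch (the function and variable names are illustrative):

```python
def gear_ratio_error(na, nb, nc, nd):
    """Squared deviation of the gear ratio (nb*nc)/(na*nd) from 1/6.931, Eq. (24)."""
    return (1.0 / 6.931 - (nb * nc) / (na * nd)) ** 2

# Tooth counts reported in Table 9 for GLF-GWO and for classical GWO.
f_glf = gear_ratio_error(49, 19, 16, 43)   # about 2.70e-12
f_gwo = gear_ratio_error(53, 20, 13, 34)   # about 2.31e-11
```

The GLF–GWO design reproduces the target ratio about an order of magnitude more accurately than the classical GWO design, consistent with the f1,min column of Table 9.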


Fig. 6  Convergence curves for CEC2014 benchmark problems (objective function value vs. iterations for F6, F8, F9, F10, F12, F13, F16, and F17)

Fig. 7  Convergence curves for CEC2014 benchmark problems (objective function value vs. iterations for F20, F21, F22, F24, F25, and F30)

\[ \text{s.t. } 12 \le \eta_A, \eta_B, \eta_C, \eta_D \le 60. \tag{25} \]

30 runs with 300 function evaluations each are executed to obtain the solution of this problem. To compare the results of the GLF–GWO, the results of various other optimization methods such as classical GWO [5], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as sine cosine algorithm (SCA) [40] and moth-flame optimization (MFO) algorithm [41] are also presented in Table 9. In the same table, various other
optimization methods such as classical GWO [5], variants also presented in Table 9. In the same table, various other

13
Engineering with Computers

Table 6  Comparison of objective function values between constrained GWO and constrained GLF–GWO

Function Algorithm Med Min Max Avg SD
g01 GWO − 11.9761 − 14.9976 − 7.0662 − 11.2995 2.1315
GLF–GWO − 15 − 15 − 14.9999 − 15 0
g02 GWO − 0.7409 − 0.8033 − 0.6209 − 0.7367 0.0462
GLF–GWO − 0.7459 − 0.8033 − 0.6275 − 0.7294 0.0544
g03 GWO 0 − 1.0005 0 − 0.3192 0.4543
GLF–GWO − 1.0004 − 1.0005 − 0.0006 − 0.7386 0.4048
g04 GWO − 30,665.3569 − 30,665.4946 − 30,665.2306 − 30,665.3717 0.0861
GLF–GWO − 30,665.3528 − 30,665.5203 − 30,665.0825 − 30,665.3389 0.1134
g06 GWO − 6951.7243 − 7950.9625 − 6874.8432 − 7060.6732 336.4714
GLF–GWO − 6958.9155 − 6961.4784 − 6954.4886 − 6958.7341 2.4637
g07 GWO 29.6348 25.2089 136.2955 37.8603 29.4342
GLF–GWO 24.6342 24.3851 25.6385 24.7221 0.2884
g08 GWO − 0.0958 − 0.0958 − 0.0958 − 0.0958 0.0000
GLF–GWO − 0.0958 − 0.0958 − 0.0958 − 0.0958 0.0000
g09 GWO 683.2727 680.7987 711.5577 685.3255 6.5667
GLF–GWO 680.8655 680.6538 683.3862 681.0990 0.7173
g10 GWO 7560.2631 7098.5460 8302.7828 7571.2017 374.1874
GLF–GWO 8345.6277 7729.9603 8554.3989 8276.2365 199.8099
g11 GWO 0.7499 0.7499 0.7570 0.7502 0.0014
GLF–GWO 0.7499 0.7499 0.9998 0.7669 0.0542
g12 GWO − 1 − 1 − 1 − 1 0
GLF–GWO − 1 − 1 − 1 − 1 0
g13 GWO 0.9999 0.9797 1.0538 1.0029 0.0203
GLF–GWO 0.9997 0.9527 2.3466 1.1903 0.4293
g14 GWO − 41.6151 − 46.5387 − 37.4392 − 41.8264 2.4808
GLF–GWO − 41.2324 − 46.7926 − 38.1941 − 41.9672 2.4774
g15 GWO 968.9376 961.7108 972.3102 968.2300 4.1666
GLF–GWO 966.1207 961.7157 971.8903 965.7727 3.3921
g16 GWO − 1.8376 − 1.9038 − 1.7431 − 1.8436 0.0490
GLF–GWO − 1.8422 − 1.9045 − 1.6776 − 1.8346 0.0664
g18 GWO − 0.8570 − 0.8659 − 0.4990 − 0.7981 0.1124
GLF–GWO − 0.8622 − 0.8660 − 0.6569 − 0.8233 0.0796
g19 GWO 35.5424 33.4655 66.3717 38.5536 8.0204
GLF–GWO 39.6384 33.2874 82.7696 43.0767 11.1777
g23 GWO − 0.0009 − 0.0009 − 0.0009 − 0.0009 NA
GLF–GWO − 0.0437 − 0.0651 809.3461 269.7458 467.3076
g24 GWO − 5.5079 − 5.5080 − 2 − 5.3676 0.7016
GLF–GWO − 5.5026 − 5.5080 − 3 − 5.2834 0.6213

The better results are highlighted in bold

Table 7  Statistical conclusions obtained by applying Wilcoxon test on CEC 2006 problems

Function  Conclusion  Function  Conclusion  Function  Conclusion  Function  Conclusion
g01       +           g07       +           g13       =           g19       −
g02       −           g08       +           g14       =           g20       ×
g03       +           g09       +           g15       +           g21       ×
g04       =           g10       −           g16       =           g22       ×
g05       ×           g11       =           g17       ×           g23       =
g06       +           g12       =           g18       =           g24       −


Table 8  Comparison of objective function values on CEC 2006 problems


Function PSO modGWO OBGWO IGWO EEGWO MFO SCA GWO GLF–GWO

g01 − 5.840 − 11.209 − 11.842 − 13.003 − 11.827 − 12.440 − 5.798 − 11.300 − 15.000
g02 − 0.746 − 0.743 − 0.783 − 0.742 − 0.545 − 1.502 − 1.364 − 0.737 − 0.729
g03 − 0.332 − 0.429 − 0.194 − 0.997 − 0.972 0.000 0.000 − 0.319 − 0.739
g04 − 30,593.961 − 30,665.177 − 30,665.250 − 30,664.524 − 30,661.612 − 30,553.748 − 30,609.250 − 30,665.372 − 30,665.339
g06 − 8307.003 − 11,700.005 − 14,545.430 − 6955.063 − 6953.716 − 7045.380 − 29,658.775 − 7060.673 − 6958.734
g07 186.959 46.995 28.877 − 2648.067 − 6306.258 − 981.876 56.876 37.860 24.722
g08 − 0.096 2.175 − 0.096 − 0.096 − 0.096 − 0.090 27.908 − 0.096 − 0.096
g09 680.637 685.077 686.008 735.107 841.479 682.025 688.589 685.326 681.099
g10 8133.462 7581.981 7905.542 7971.016 8578.560 11,027.554 9641.809 7571.202 8276.237
g11 0.800 0.750 0.754 0.763 0.770 0.990 0.750 0.750 0.767
g12 − 1.000 − 1.000 − 1.000 − 1.000 − 1.000 − 0.959 − 1.000 − 1.000 − 1.000
g13 0.627 0.430 − 0.810 0.411 − 1.000 0.645 − 1.000 1.003 1.190
g14 − 22.055 − 41.790 − 41.408 − 42.123 − 1.000 − 40.762 − 1.000 − 41.826 − 41.967
g15 689.719 803.868 765.466 885.155 772.762 886.154 − 1.000 968.230 965.773
g16 − 1.827 − 1.902 − 1.902 − 1.900 − 1.865 − 1.804 − 1.622 − 1.844 − 1.835
g18 − 0.862 − 0.774 − 0.737 − 0.634 − 0.498 − 0.805 − 0.707 − 0.798 − 0.823
g19 82.370 38.493 34.494 36.207 38.733 51.779 168.600 38.554 43.077
g23 73.354 72.596 34.494 0.000 38.733 95.697 0.000 − 0.001 269.746
g24 − 5.333 − 5.508 − 5.508 − 5.508 − 5.503 − 5.270 − 5.297 − 5.368 − 5.283
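The constrained comparisons in Tables 6–8 rely on the constraint-handling technique of Sect. 3.3, which is not reproduced in this excerpt. One widely used option in GWO variants is Deb's feasibility rules; the sketch below is shown only as an illustration of that general idea, with `better` and the (objective, violation) pairing as illustrative choices:

```python
def better(a, b):
    """Deb-style feasibility rules for comparing two candidate solutions.

    a, b: (objective_value, total_constraint_violation) pairs, where the
    violation is sum(max(0, g_i(x))) over the inequality constraints.
    Returns True if a should be preferred over b (minimization).
    """
    f_a, viol_a = a
    f_b, viol_b = b
    if viol_a == 0 and viol_b == 0:
        return f_a < f_b          # both feasible: lower objective wins
    if viol_a == 0 or viol_b == 0:
        return viol_a == 0        # feasible always beats infeasible
    return viol_a < viol_b        # both infeasible: smaller violation wins

# A feasible point beats an infeasible one even with a worse objective value.
prefer_feasible = better((10.0, 0.0), (1.0, 0.5))
```

Under such rules an algorithm that never produces a zero-violation point reports no feasible solution, which is the situation described above for g05, g17, and g20–g22.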


Table 9  Results comparison for gear train design problem

Algorithm  ηA  ηB  ηC  ηD  f1,min
GLF–GWO    49  19  16  43  2.7009E−12
GWO        53  20  13  34  2.3078E−11
modGWO     53  20  13  34  2.3078E−11
EEGWO      51  24  13  37  4.4363E−04
IGWO       58  29  15  52  2.3576E−09
OBGWO      39  12  15  32  2.3576E−09
SCA        50  12  36  60  7.8022E−08
MFO        52  30  13  52  2.3576E−09
MBA        50  14  17  33  1.3620E−09
ABC        49  19  16  44  1.0742E−05
ALM        41  15  13  33  2.4070E−08

Table 10  Comparison results for FM-design problem

Algorithm      Min       Avg       Max       SD
GLF–GWO        1.22E−04  1.12E+01  2.15E+01  5.91E+00
Classical GWO  8.42E+00  1.74E+01  2.51E+01  5.13E+00
modGWO         3.00E−02  1.72E+01  2.52E+01  6.50E+00
EEGWO          2.85E+01  2.99E+01  3.02E+01  2.82E−01
IGWO           8.51E+00  1.87E+01  2.52E+01  5.61E+00
OBGWO          2.98E+01  2.98E+01  2.98E+01  1.08E−14
SCA            1.16E+01  1.58E+01  2.20E+01  3.82E+00
MFO            1.09E+01  2.24E+01  2.87E+01  4.31E+00
CPSOH          3.45E+00  2.71E+01  4.25E+01  6.06E+01
G-CMA-ES       3.33E+00  3.88E+01  5.51E+01  1.68E+01

The better results are highlighted in bold
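The FM parameter-estimation objective behind Table 10 (Eqs. (26)–(29), stated in Sect. 5.2 below) can be sketched directly, assuming the standard target-wave coefficients (1.0, 5.0, 1.5, 4.8, 2.0, 4.9) of this benchmark:

```python
import math

PHI = 2 * math.pi / 100  # phi = 2*pi/100

def fm_wave(t, a1, w1, a2, w2, a3, w3):
    """Nested FM sound wave of the form used in Eq. (28):
    a1*sin(w1*t*phi + a2*sin(w2*t*phi + a3*sin(w3*t*phi)))."""
    return a1 * math.sin(w1 * t * PHI
                         + a2 * math.sin(w2 * t * PHI
                                         + a3 * math.sin(w3 * t * PHI)))

def fm_error(x):
    """Sum over t = 1..100 of the squared difference between the
    estimated wave and the target wave, as in Eq. (26)."""
    total = 0.0
    for t in range(1, 101):
        target = fm_wave(t, 1.0, 5.0, 1.5, 4.8, 2.0, 4.9)  # Eq. (29)
        estimate = fm_wave(t, *x)
        total += (estimate - target) ** 2
    return total
```

At the known optimum, the estimated wave reproduces the target exactly and the error is zero, which is why near-zero minima such as GLF–GWO's 1.22E−04 in Table 10 indicate a near-perfect parameter recovery.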

methods (MBA [44], artificial bee colony (ABC) [45], and augmented Lagrange multiplier (ALM) [46]), which have been applied in the literature to solve the same problem, are also mentioned. The comparison of results ensures the better search efficiency of the proposed GLF–GWO algorithm in determining the number of teeth of a gear train.

5.2 Estimation of parameters for frequency-modulated (FM) sound waves

The aim of this problem is the estimation of the decision parameters of an FM synthesizer. It is a six-dimensional problem with the decision variables x1, x2, x3, x4, x5, and x6. The mathematical expression for this problem can be presented as follows:

\[ \mathrm{Min}\; f_2(X) = \sum_{t=1}^{100} \left( X(t) - X_0(t) \right)^2, \quad X = \left( x_1, x_2, x_3, x_4, x_5, x_6 \right), \tag{26} \]

\[ \text{s.t. } -6.40 \le x_i \le 6.35 \quad \text{for all } i = 1, 2, \ldots, 6. \tag{27} \]

The expressions for the estimated sound and the target sound waves are given as

\[ X(t) = x_1 \sin\left( x_2 t\phi + x_3 \sin\left( x_4 t\phi + x_5 \sin\left( x_6 t\phi \right) \right) \right), \tag{28} \]

\[ X_0(t) = 1.0 \sin\left( 5 t\phi + 1.5 \sin\left( 4.8 t\phi + 2 \sin\left( 4.9 t\phi \right) \right) \right), \tag{29} \]

respectively, where \( \phi = 2\pi/100 \).

To obtain the solution of this problem, 30 independent runs are conducted with 2 × 10^5 function evaluations, and the obtained solutions are presented in Table 10. To compare the results of the GLF–GWO, the classical GWO and other variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as sine cosine algorithm (SCA) [40] and moth-flame optimization (MFO) algorithm [41] are also employed on this problem with the same parameter setting. To compare with other algorithms employed in the literature, CPSOH [47, 48] and G-CMA-ES [48, 49] are considered. Table 10 clearly favors the better search efficiency of the proposed GLF–GWO algorithm.

5.3 Design of three bar truss

The design of three bar truss problem [50] is a nonlinear fractional programming problem. In this problem, the goal is to achieve the minimum volume of the truss structure subject to stress constraints. Mathematically, the truss design problem is stated as follows:

\[ \mathrm{Min}\; f_3(x) = \left( 2\sqrt{2}\, x_1 + x_2 \right) \times l, \tag{30} \]

\[ \text{s.t. } g_1(x) = \frac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0, \tag{31} \]

\[ g_2(x) = \frac{x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0, \tag{32} \]

\[ g_3(x) = \frac{1}{x_1 + \sqrt{2}\, x_2}\, P - \sigma \le 0, \tag{33} \]

\[ 0 < x_1, x_2 \le 1. \tag{34} \]

Here \( l = 100\ \mathrm{cm} \), \( P = 2\ \mathrm{kN/cm^2} \), \( \sigma = 2\ \mathrm{kN/cm^2} \).

The obtained results using classical GWO and the proposed GLF–GWO are reported in Table 11. 5000 function evaluations, the same budget as in [50], are fixed to solve this problem. In Table 11, the results are compared with variants of GWO such as modified GWO (modGWO) [36], improved

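The three-bar truss design (Eqs. (30)–(34)) can be checked numerically at the GLF–GWO solution reported in Table 11; a short sketch:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # l (cm); load P and stress limit sigma (kN/cm^2)

def truss_volume(x1, x2):
    """Objective of Eq. (30): structural volume of the three-bar truss."""
    return (2 * math.sqrt(2) * x1 + x2) * L

def truss_constraints(x1, x2):
    """Stress constraints g1-g3 of Eqs. (31)-(33); feasible when all <= 0."""
    denom = math.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g1 = (math.sqrt(2) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (x1 + math.sqrt(2) * x2) * P - SIGMA
    return (g1, g2, g3)

# GLF-GWO design from Table 11: f3 is about 263.8969 with all constraints met.
f3 = truss_volume(0.788174, 0.4096753)
g = truss_constraints(0.788174, 0.4096753)
```

At this design g1 is essentially active (close to zero), which is expected: the optimum of this problem lies on the first stress constraint.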

Table 11  Results' comparison for truss bar design problem

Algorithm           x1         x2         f3,min
GLF–GWO             0.788174   0.4096753  263.8969
GWO                 0.787823   0.4106738  263.8974
modGWO              0.7878452  0.4106108  263.8974
IGWO                0.8087516  0.3592847  264.6780
OBGWO               0.7880142  0.4101417  263.8963
EEGWO               0.8087517  0.3592847  264.6780
SCA                 0.78394    0.42219    263.9506
PSO                 0.58959    0.20568    263.8994
MFO                 0.78753    0.41150    263.8968
CS                  0.78867    0.40902    263.9716
Ray and Saini [52]  0.7950     0.3950     264.300
Tsai [53]           0.7880     0.4080     263.680 (infeasible)

GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as sine cosine algorithm (SCA) [40] and moth-flame optimization (MFO) algorithm [41]. In the table, the results of various other studies [Cuckoo Search (CS) algorithm [51], Ray and Saini [52], and Tsai [53]] are also reported. The table ensures the better performance of the GLF–GWO algorithm in finding optima compared to the other reported state-of-the-art algorithms.

5.4 Design of speed reducer

To find the efficient design of a speed reducer, the proposed GLF–GWO is applied in the present section. This problem is well known and considered as a benchmark structural problem [54]. In this problem, seven decision parameters are involved, as shown in Fig. 8 [51]: "face width (b)", "module of teeth (m)", "number of teeth on pinion (N)", "length of shaft I between bearings (L1)", "length of shaft II between bearings (L2)", "diameter of shaft I (D1)", and "diameter of shaft II (D2)". The goal of this problem is to achieve the minimum weight f4(x) of the speed reducer subject to several constraints. Mathematically, this problem can be stated as follows:

\[ \begin{aligned} \mathrm{Min}\; f_4(x) = {} & 0.7854 x_1 x_2^2 \left( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \right) \\ & - 1.508 x_1 \left( x_6^2 + x_7^2 \right) + 7.4777 \left( x_6^3 + x_7^3 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right), \\ & x = \left( x_1, x_2, x_3, x_4, x_5, x_6, x_7 \right) = \left( b, m, N, L_1, L_2, D_1, D_2 \right), \end{aligned} \tag{35} \]

\[ \text{s.t. } g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \tag{36} \]

\[ g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, \tag{37} \]

\[ g_3(x) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \tag{38} \]

\[ g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0, \tag{39} \]

\[ g_5(x) = \frac{\sqrt{ \left( \dfrac{745 x_4}{x_2 x_3} \right)^2 + 1.69 \times 10^7 }}{110 x_6^3} - 1 \le 0, \tag{40} \]

\[ g_6(x) = \frac{\sqrt{ \left( \dfrac{745 x_5}{x_2 x_3} \right)^2 + 1.57 \times 10^8 }}{85 x_7^3} - 1 \le 0, \tag{41} \]

\[ g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0, \tag{42} \]

Fig. 8  Speed reducer
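As a numerical sanity check, the objective and the nine inequality constraints of the speed reducer model (Eqs. 35–44) can be evaluated at the best design reported in Table 12. The sketch below is illustrative (the function names are ours, not the paper's); it confirms that the GLF–GWO design is feasible and that its weight matches the tabulated value:

```python
import math

def f4(x):
    # Speed reducer weight, Eq. (35)
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def constraints(x):
    # g1..g9 of Eqs. (36)-(44); each value must be <= 0 for feasibility.
    # 1.575e8 (i.e., 157.5e6) is the constant used in the standard form of
    # this benchmark.
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
        math.sqrt((745 * x4 / (x2 * x3))**2 + 1.69e7) / (110 * x6**3) - 1,
        math.sqrt((745 * x5 / (x2 * x3))**2 + 1.575e8) / (85 * x7**3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
    ]

# Best GLF-GWO design from Table 12: (b, m, N, L1, L2, D1, D2)
x_best = (3.5000091, 0.7, 17, 7.3, 7.8, 3.3502335, 5.2866856)
print(f4(x_best))                                    # close to the reported 2996.3580
print(all(g <= 1e-3 for g in constraints(x_best)))   # feasible within a small tolerance
```

Note that g5 and g6 are nearly active at this design, which is why a small numerical tolerance is used in the feasibility check.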

g8(x) = 5 x2 / x1 − 1 ≤ 0,  (43)

g9(x) = x1 / (12 x2) − 1 ≤ 0,  (44)

2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3,  (45)

2.9 ≤ x6 ≤ 3.9, 5 ≤ x7 ≤ 5.5.  (46)

Table 12  Results comparison for speed reducer design problem

Algorithm               x1         x2         x3          x4         x5         x6         x7         f4,min
GLF–GWO                 3.5000091  0.7        17          7.3        7.8        3.3502335  5.2866856  2996.3580
GWO                     3.5002806  0.7000143  17.0001190  7.3016273  7.8163428  3.3511793  5.2867749  2997.2209
modGWO                  3.5003289  0.7        17.0005627  7.3077181  7.8167703  3.3506819  5.2868869  2997.2595
IGWO                    3.5014149  0.7        17.0000068  7.3036995  7.8120779  3.3523205  5.2871168  2998.0156
OBGWO                   3.5013169  0.7        17.0007249  7.3021514  7.8037586  3.3504974  5.2867804  2997.2261
EEGWO                   3.2179299  0.7049508  17.1347655  7.6625212  8.2640180  3.6041139  5.4553788  3124.4688
SCA                     3.5188919  0.7        17          7.3        8.3        3.3589858  5.3051908  3028.8657
MFO                     3.5        0.7        17          7.3        7.8        3.350214   5.286683   2996.3482
Ray and Saini [52]      3.514185   0.700005   17          7.497343   7.8346     2.9018     5.0022     2732.9006 (infeasible)
Montes and Coello [55]  3.506163   0.700831   17          7.460181   7.962143   3.3629     5.3090     3025.0050
CS [51]                 3.5015     0.7        17          7.6050     7.8181     3.3520     5.2875     3000.9810
Akhtar et al. [56]      3.506122   0.700006   17          7.549126   7.85933    3.365576   5.289773   3008.0800

The better results are highlighted in bold.

This problem has been solved using the same number of function evaluations as used in Ref. [51]. 30 runs of each algorithm have been performed for this problem, and the obtained best optimum value is reported in Table 12. The table also presents a comparison of results obtained by some other improved versions of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. In Table 12, the results obtained from various studies (CS [51], Ray and Saini [52], Montes and Coello [55], and Akhtar et al. [56]) are also presented. The comparison presented in the table clearly indicates the better efficacy of the proposed GLF–GWO as compared to the other algorithms.

6 Conclusions

The present work focuses on improving the leading search ability of the wolf pack in the grey wolf optimizer (GWO), so that more efficient directions of search can be explored. The Levy-flight local search has proved its efficiency in enhancing the leaders of the pack. The proposed algorithm, which is the hybridization of the Levy-flight search mechanism and the classical GWO, is named GLF–GWO in this paper. The Levy-flight search strategy locally explores the promising regions around the leaders to provide comparatively better guidance, and it is especially useful when the wolf pack is trapped in sub-optimal solutions. In GLF–GWO, greedy selection between the positions of wolves is employed to maintain the personal best memory of the wolves; it also avoids the high diversity within the algorithm that usually causes true solutions to be skipped while solving a problem. To validate the performance of the proposed GLF–GWO algorithm, the standard and well-known unconstrained benchmark suite IEEE CEC 2014 and the constrained benchmark suite IEEE CEC 2006 are taken. Various performance metrics, such as statistical and convergence analysis of the results, show the better efficiency of the proposed algorithm as an unconstrained and constrained optimizer compared to the classical GWO. The comparison presented with other algorithms also shows the competitive ability of the proposed GLF–GWO algorithm.

In the future, we will implement the proposed algorithm to solve complex real-life optimization problems. Since GWO is inspired by the leading characteristics of wolves, the leaders are the decisive part of the algorithm. In the future, we will also introduce other search strategies and/or genetic operators for the leading hunters to enhance the guiding search ability in GWO.

Acknowledgements  The first author is grateful for the financial support provided by the Ministry of Human Resource and Development (MHRD), Government of India (Grant no. MHR-02-41-113-429).
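As a recap, the two ingredients highlighted above, Levy-flight search around the leaders and greedy selection, can be sketched as follows. This is a minimal illustration rather than the paper's exact update equations: the Mantegna-style step, the parameter names, and the 0.01 step scale are all our assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for generating a Levy-distributed step length
    # (assumed form of the Levy-flight move; beta is the stability index).
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def update_leader(leader, objective, bounds, scale=0.01):
    # Perturb a leading wolf with a Levy flight, then keep the move only if
    # it improves the objective (greedy selection), so the leader never
    # diverges from an already-discovered promising region.
    candidate = []
    for xi, (lo, hi) in zip(leader, bounds):
        yi = xi + scale * levy_step() * (hi - lo)
        candidate.append(min(max(yi, lo), hi))  # clamp to the search box
    return candidate if objective(candidate) < objective(leader) else leader

# Toy usage on the sphere function: with greedy selection the leader's
# objective value can only improve or stay the same.
sphere = lambda x: sum(v * v for v in x)
leader = [0.5, -0.3]
for _ in range(200):
    leader = update_leader(leader, sphere, [(-1.0, 1.0), (-1.0, 1.0)])
```

Most Levy steps are small (local exploitation around the leader), while occasional long jumps help escape sub-optimal regions; the greedy comparison is what prevents those jumps from discarding a good leader.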

References

1. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Micro machine and human science, 1995. MHS'95, proceedings of the sixth international symposium on. IEEE, pp 39–43
2. Dorigo M (1992) Optimization, learning and natural algorithms. PhD thesis, Politecnico di Milano
3. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471
4. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
5. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
6. Wolpert DH, Macready WG (1995) No free lunch theorems for search, vol 10. Technical Report SFI-TR-95-02-010, Santa Fe Institute
7. Madadi A, Motlagh MM (2014) Optimal control of DC motor using grey wolf optimizer algorithm. TJEAS J 4(4):373–379
8. Mirjalili S (2015) How effective is the grey wolf optimizer in training multi-layer perceptrons. Appl Intell 43(1):150–161
9. Song X, Tang L, Zhao S, Zhang X, Li L, Huang J, Cai W (2015) Grey wolf optimizer for parameter estimation in surface waves. Soil Dyn Earthq Eng 75:147–157
10. Sulaiman MH, Mustaffa Z, Mohamed MR, Aliman O (2015) Using the gray wolf optimizer for solving optimal reactive power dispatch problem. Appl Soft Comput 32:286–292
11. Guha D, Roy PK, Banerjee S (2016) Load frequency control of interconnected power system using grey wolf optimization. Swarm Evolut Comput 27:97–115
12. Zhang S, Zhou Y, Li Z, Pan W (2016) Grey wolf optimizer for unmanned combat aerial vehicle path planning. Adv Eng Softw 99:121–136
13. Kamboj VK, Bath SK, Dhillon JS (2016) Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comput Appl 27(5):1301–1316
14. Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
15. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
16. Muangkote N, Sunat K, Chiewchanwattana S (2014) An improved grey wolf optimizer for training q-Gaussian radial basis functional-link nets. In: Computer science and engineering conference (ICSEC), 2014 international. IEEE, pp 209–214
17. El-Fergany AA, Hasanien HM (2015) Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms. Electr Power Compon Syst 43(13):1548–1559
18. Jayabarathi T, Raghunathan T, Adarsh BR, Suganthan PN (2016) Economic dispatch using hybrid grey wolf optimizer. Energy 111:630–641
19. Rodríguez L, Castillo O, Soria J, Melin P, Valdez F, Gonzalez CI et al (2017) A fuzzy hierarchical operator in the grey wolf optimizer algorithm. Appl Soft Comput 57:315–328
20. Yang B, Zhang X, Yu T, Shu H, Fang Z (2017) Grouped grey wolf optimizer for maximum power point tracking of doubly-fed induction generator based wind turbine. Energy Convers Manag 133:427–443
21. Tawhid MA, Ali AF (2017) A hybrid grey wolf optimizer and genetic algorithm for minimizing potential energy function. Memet Comput 9(4):347–359
22. Mirjalili S, Saremi S, Mirjalili SM, Coelho LDS (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst Appl 47:106–119
23. Lu C, Gao L, Li X, Xiao S (2017) A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng Appl Artif Intell 57:61–79
24. Heidari AA, Pahlavani P (2017) An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl Soft Comput 60:115–134
25. Tawhid MA, Ali AF (2018) Multidirectional grey wolf optimizer algorithm for solving global optimization problems. Int J Comput Intell Appl 17(04):1850022
26. Tu Q, Chen X, Liu X (2018) Multi-strategy ensemble grey wolf optimizer and its application to feature selection. Appl Soft Comput 76:16–30
27. Singh D, Dhillon JS (2018) Ameliorated grey wolf optimization for economic load dispatch problem. Energy 169:398–419
28. Saxena A, Kumar R, Das S (2019) β-Chaotic map enabled grey wolf optimizer. Appl Soft Comput 75:84–105
29. Qais MH, Hasanien HM, Alghuwainem S (2018) Augmented grey wolf optimizer for grid-connected PMSG-based wind energy conversion systems. Appl Soft Comput 69:504–515
30. Gupta S, Deep K (2019) An efficient grey wolf optimizer with opposition-based learning and chaotic local search for integer and mixed-integer optimization problems. Arab J Sci Eng. https://doi.org/10.1007/s13369-019-03806-w
31. Muro C, Escobedo R, Spector L, Coppinger RP (2011) Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav Proc 88(3):192–197
32. Yang XS (2010) Nature-inspired metaheuristic algorithms. Luniver Press, Frome
33. Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186(2–4):311–338
34. Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Technical Report, Nanyang Technological University, Singapore
35. Liang JJ, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan PN, Coello CC, Deb K (2006) Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J Appl Mech 41(8):8–31
36. Mittal N, Singh U, Sohi BS (2016) Modified grey wolf optimizer for global engineering optimization. Appl Comput Intell Soft Comput 2016:8
37. Long W, Liang X, Cai S, Jiao J, Zhang W (2017) A modified augmented Lagrangian with improved grey wolf optimization to constrained optimization problems. Neural Comput Appl 28(1):421–438
38. Pradhan M, Roy PK, Pal T (2017) Oppositional based grey wolf optimization algorithm for economic dispatch problem of power system. Ain Shams Eng J 9(4):2015–2025
39. Long W, Jiao J, Liang X, Tang M (2018) An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization. Eng Appl Artif Intell 68:63–80
40. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133
41. Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl Based Syst 89:228–249
42. Song X, Tang L, Lv X, Fang H, Gu H (2012) Application of particle swarm optimization to interpret Rayleigh wave dispersion curves. J Appl Geophys 84:1–13
43. Sandgren E (1990) Nonlinear integer and discrete programming in mechanical design optimization. J Mech Des 112(2):223–229
44. Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612
45. Sharma TK, Pant M, Singh VP (2012) Improved local search in artificial bee colony using golden section search. arXiv preprint arXiv:1210.6128
46. Kannan BK, Kramer SN (1994) An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):405–411
47. Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295
48. Van Laarhoven PJ, Aarts EH (1987) Simulated annealing. In: Aarts E, Lenstra JK (eds) Simulated annealing: theory and applications. Springer, Dordrecht, pp 7–15
49. Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Evolutionary computation, 2005. The 2005 IEEE congress on. IEEE, vol 2, pp 1769–1776
50. Nowcki H (1974) Optimization in pre-contract ship design. In: Fujita Y, Lind K, Williams TJ (eds) Computer applications in the automation of shipyard operation and ship design, vol 2. North-Holland, Elsevier, New York, pp 327–338
51. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
52. Ray T, Saini P (2001) Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748
53. Belegundu AD, Arora JS (1985) A study of mathematical programming methods for structural optimization. Part I: theory. Int J Numer Methods Eng 21(9):1583–1599
54. Gandomi AH, Yang XS (2011) Benchmark problems in structural optimization. In: Koziel S, Yang X-S (eds) Computational optimization, methods and algorithms. Springer, Berlin, pp 259–281
55. Mezura-Montes E, Coello CC, Landa-Becerra R (2003) Engineering optimization using simple evolutionary algorithm. In: Tools with artificial intelligence, 2003. Proceedings. 15th IEEE international conference on. IEEE, pp 149–156
56. Akhtar S, Tai K, Ray T (2002) A socio-behavioural simulation model for engineering design optimization. Eng Optim 34(4):341–354

Publisher's Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.