
Expert Systems With Applications 152 (2020) 113396

An aggregative learning gravitational search algorithm with self-adaptive gravitational constants

Zhenyu Lei a, Shangce Gao a,∗, Shubham Gupta d, Jiujun Cheng b, Gang Yang c

a Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
b Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 200092, China
c School of Information, Renmin University of China, Beijing, China
d Research Institute for Mega Construction, Korea University, Seoul 02841, South Korea

Article history: Received 7 January 2020; Revised 13 March 2020; Accepted 17 March 2020; Available online 18 March 2020.

Keywords: Gravitational search algorithm; Gravitational constant; Elite individuals; Exploration and exploitation; Aggregative learning; Neural network learning.

Abstract: The gravitational search algorithm (GSA) is a meta-heuristic algorithm based on the theory of Newtonian gravity. This algorithm uses the gravitational forces among individuals to move their positions in order to find a solution to optimization problems. Many studies indicate that the GSA is an effective algorithm, but in some cases, it still suffers from low search performance and premature convergence. To alleviate these issues of the GSA, an aggregative learning GSA called the ALGSA is proposed with a self-adaptive gravitational constant in which each individual possesses its own gravitational constant to improve the search performance. The proposed algorithm is compared with some existing variants of the GSA on the IEEE CEC2017 benchmark test functions to validate its search performance. Moreover, the ALGSA is also tested on neural network optimization to further verify its effectiveness. Finally, the time complexity of the ALGSA is analyzed to clarify its search performance.

1. Introduction

Currently, various optimization problems are more complex to solve, such as function optimization (Doğan & Ölmez, 2015; Storn, 1996), multiobjective optimization (Chang, 2015; De Castro & Timmis, 2002), dynamic optimization (Lee, Yeon, & Ghovanloo, 2016; Mavrovouniotis, Li, & Yang, 2017), combinatorial optimization (Song, Huang, Yan, Xiong, & Min, 2016) and engineering optimization problems (Askarzadeh, 2016; Mittal, Singh, & Sohi, 2016), since the solutions are defined in a high-dimensional space. Therefore, there are various evolutionary algorithms (EAs) (Antonio & Coello, 2017; Gao et al., 2019; Gong et al., 2015; Li, Li, Tang, & Yao, 2015), such as particle swarm optimization (PSO) (Roy, Mahapatra, & Dey, 2019; Wang & Kumbasar, 2019), artificial bee colony (ABC) (Gu, Yu, & Hu, 2017), ant colony optimization (ACO) (Gao, Wang, Cheng, Inazumi, & Tang, 2016), the genetic algorithm (GA) (Holland, 1992), differential evolution (DE) (Yu, Gao, Wang, & Todo, 2019) and the gravitational search algorithm (GSA) (Rashedi, Nezamabadi-Pour, & Saryazdi, 2009), that have solved these types of optimization problems.

A large number of studies have shown that the GSA has the potential to solve optimization problems. González et al. used the GSA to deal with echocardiogram recognition problems and obtained satisfying results (González, Valdez, Melin, & Prado-Arechiga, 2015). In the Internet of Things, Dhumane and Prasad (2019) utilized the GSA to extend the lifetime of the nodes. Li et al. proposed a new FOPID-CGGSA controller for a pump turbine governing system, and the newly proposed controller is significantly better than the traditional controllers (Li, Zhang, Lai, Zhou, & Xu, 2017). Olivas et al. also verified the effectiveness of the GSA in control problems (Olivas, Valdez, Melin, Sombra, & Castillo, 2019); in that work, type-2 fuzzy logic is used to dynamically adjust the parameters of the GSA. These studies have verified that the GSA has gained success in some complex real-world problems. Therefore, by analyzing the capabilities of the GSA, its performance can be further improved to solve global optimization problems more efficiently. As a population-based meta-heuristic algorithm, the GSA possesses exploration and exploitation abilities. The exploration ability of the individuals helps to explore new search regions so that stagnation in the local optima can be avoided during the search process. The exploitation ability refers to the local search ability whereby the algorithm can perform a search within the neighborhood of the previously visited promising areas of the search space.

∗ Corresponding authors.
E-mail addresses: m1871139@ems.u-toyama.ac.jp (Z. Lei), gaosc@eng.u-toyama.ac.jp (S. Gao), sgupta@ma.iitr.ac.in, g.shubh93@gmail.com (S. Gupta), chengjj@tongji.edu.cn (J. Cheng), yanggang@ruc.edu.cn (G. Yang).

https://doi.org/10.1016/j.eswa.2020.113396

Hence, the trade-off between exploration and exploitation plays a key role in enhancing the search performance of algorithms. Some researchers adjust the population structure or the algorithm parameters to balance exploration and exploitation.

In meta-heuristic algorithms, different population structures indicate different interaction manners among individuals. A common population structure is panmictic, in which every individual can interact with any other individual. Some researchers utilize distributed and cellular structures to improve the search performance of meta-heuristic algorithms. In the distributed structure, the population is divided into several subpopulations. Every subpopulation evolves individually, and it exchanges information with other subpopulations. In the cellular structure, an individual only interacts with its neighborhood. Alba and Dorronsoro (2005) proposed the cellular GA, where the individuals only interact with their neighbors. Wang, Yu, Gao, Pan, and Yang (2019) proposed a hierarchical GSA to guide the search of individuals. In this variant of the GSA, individual interactions are performed in a three-layer hierarchical structure (i.e., global layer, Kbest layer, and population layer). Ji et al. (2019) adopted a scale-free network bee colony algorithm to improve the search ability. Giacobini, Preuss, and Tomassini (2006) were the first to adopt small-world topologies to slow the convergence speed of DE. Various population structures are used to enhance the search performance of meta-heuristic algorithms, and the results indicate that the population structure can influence the search ability of meta-heuristic algorithms. In addition, various operators are utilized to balance exploration and exploitation (e.g., mutation operators and crossover operators). Some researchers utilize various parameter tuning methods to balance exploration and exploitation in a more appropriate way. The parameter tuning methods include the deterministic strategy, the adaptive strategy and the self-adaptive strategy. In the deterministic strategy, the parameter is adjusted using some deterministic rules. In the adaptive strategy, the feedback from the population information is utilized to dynamically tune the parameter. In the self-adaptive strategy, the parameters are encoded into the individual, and they change according to the search conditions of the individuals. Especially, in many adaptive algorithms, each individual has its own operators to balance exploration and exploitation through many adjustment strategies.

With the aim of balancing exploration and exploitation, many operators have been introduced in the literature for the GSA. A disruption operator has been used to update the position of an individual; in this operator, the heaviest individual can influence the other individuals to explore or exploit the search space (Sarafrazi, Nezamabadi-Pour, & Saryazdi, 2011). Two mutation operators have been introduced to prevent falling into the local optimal region by updating the velocity of individuals (Nobahar, Nikusokhan, & Siarry, 2012). Chaotic operators have been utilized as a chaotic local search method to improve the performance of the GSA (Gao, Vairappan, Wang, Cao, & Tang, 2014). A crossover operator has been introduced to improve the global exploration ability of the GSA and increase the convergence rate (Khatibinia & Khosravi, 2014). An escape operator adds an escape velocity to the individuals that are far away from the promising areas, with the aim of making them return to the heaviest group (Güvenç & Katırcıoğlu, 2017). The Black Hole Kepler operator, which was inspired by astrophysics, has been utilized to improve the performance of the GSA (Doraghinejad & Nezamabadi-pour, 2014; Sarafrazi, Nezamabadi-pour, & Seydnejad, 2015). Moreover, to solve binary and discrete optimization problems, the binary GSA (Rashedi, Nezamabadi-Pour, & Saryazdi, 2010) and the discrete GSA (Shamsudin et al., 2012) have been proposed, respectively. Some researchers use different strategies to adjust the gravitational constant to improve the performance of the GSA. Valdez, Melin, and Castillo (2014) verified that fuzzy logic is an effective strategy to improve the performance of optimization algorithms by dynamically adjusting parameters. Sombra, Valdez, Melin, and Castillo (2013) used fuzzy logic to adjust the alpha parameter of the GSA to improve the performance. Li, Lin, Tseng, Tan, and Lim (2018) added a change factor to adjust the alpha parameter according to the search condition. Wang et al. (2020) utilized chaotic neural oscillators to generate different chaotic sequences according to the parameter settings to adjust the gravitational constant. These studies indicate that adjusting the gravitational constant can help the GSA balance exploration and exploitation to improve the search performance. They also imply that the parameter can be adjusted with search information.

Although previous researchers have improved the performance of the GSA using various operators, the search performance of the GSA is still limited for complex optimization problems. In the GSA, the gravitational constant G plays an important role in balancing exploration and exploitation. A large G represents that an individual has a large step size to explore the search space. On the other hand, a small G indicates that an individual has a small step size to exploit the available promising areas of the search space. If the original G is gradually reduced, the exploration process quickly fades such that the individuals cannot cover the entire solution space to find an optimal solution. Another issue is that an individual is attracted only by the K best individuals, where K represents the size of Kbest. K decreases linearly from N to 1, where N is the size of the population, which also causes the gravitational force of the individual to rapidly decrease. When an individual falls into a local optimum, it cannot jump out of the optimum due to its low gravitational force in the later search process. To alleviate these issues, an aggregative learning GSA called the ALGSA is proposed in this paper with a self-adaptive gravitational constant. The novel total gravitational force enhances the search performance by improving the gravitational force of the individual. It utilizes the Kbest to build several gravitational fields; after this process, these generated gravitational fields attract individuals to search the solution space. In the proposed ALGSA, each individual is attracted by K gravitational fields. Especially, each individual adjusts its gravitational constant according to its search condition in the proposed ALGSA. In the desired region, the individual needs a stronger exploitation ability to find the optimal solution. When an individual has fallen into a local optimum, it needs a stronger exploration ability to jump out from the local optimum. Hence, each individual has its own gravitational constant in the ALGSA, and this gravitational constant is adjusted according to the search condition of the individual.

The contributions and originality of this paper are summarized as follows: (1) An aggregative learning gravitational force is first proposed to enhance the search performance of the original GSA. The aggregative learning strategy uses a novel interaction manner among individuals to improve the search performance of the GSA. The Kbest individuals are first used to construct different gravitational fields to attract individuals rather than directly attracting individuals. Experimental results also verified that the aggregative learning has the capability of making individuals escape from the local optima and improving the search performance. (2) A self-adaptive strategy is introduced to adjust the gravitational constant of each individual. Each individual individually adjusts its gravitational constant according to its search condition using the self-adaptive strategy. In the self-adaptive strategy, the search information of the population is used to adjust the G of each individual. Experimental results showed that the self-adaptive gravitational constant is an effective way of improving the search performance. (3) The ALGSA uses two parameters to control the self-adaptive gravitational constant. A discussion of the parameters is carried out to select a reasonable parameter setup. (4) The aggregative learning gravitational force and self-adaptive strategy

Table 1
Nomenclature.

Symbol: Description
$X_i(t)$: The ith individual in the population
$x_i^d(t)$: The position of the ith individual in the dth dimension
$M_i(t)$: The mass of the ith individual in the tth iteration
$f_i(t)$: The objective function fitness of the ith individual in the tth iteration
$best(t)$: The best objective function fitness in the population in the tth iteration
$worst(t)$: The worst objective function fitness in the population in the tth iteration
$G(t)$: The gravitational constant of the population in the tth iteration
$R_{ij}(t)$: The Euclidean distance between individuals $X_i(t)$ and $X_j(t)$
$F_{ij}^d(t)$: The gravitational force between individuals $X_i(t)$ and $X_j(t)$ in the dth dimension in the tth iteration
$\varepsilon$: A small value to prevent a divide-by-zero error
$\alpha$: A constant
$t$: The current iteration number
$T$: The maximum iteration number
$N$: The size of the population
$K$: The size of Kbest
$Kbest$: A set of the K top best individuals
$a_i^d(t)$: The dth-dimension acceleration of the ith individual in the tth iteration
$v_i^d(t)$: The dth-dimension velocity of the ith individual in the tth iteration
$ns_i^t$: The success counter of the ith individual in the tth iteration
$nf_i^t$: The failure counter of the ith individual in the tth iteration
$G_i(t)$: The gravitational constant of the ith individual in the tth iteration
$r_i(t)$: The adjustment amplitude of the gravitational constant of the ith individual in the tth iteration
$Y_i(j)$: The component gravitational force exerted on the ith individual by the jth gravitational field

are analyzed to verify their effectiveness. (5) The time complexity of the proposed algorithm is analyzed.

The remainder of this paper is organized as follows. Table 1 lists the nomenclature of this paper. Section 2 introduces the original GSA. Section 3 describes the issues of the GSA and the proposed ALGSA. Section 4 provides the experimental results and compares them with those of other GSA variants. Section 5 discusses the parameter analysis, the aggregative learning gravitational force analysis, the self-adaptive gravitational constant and the time complexity analysis. Section 6 provides the conclusions and future research directions.

2. Gravitational search algorithm

The GSA is a population-based meta-heuristic algorithm inspired by the law of gravity. Each individual evolves its position according to the gravitational force among individuals. The higher mass individuals attract the lower mass individuals to move towards them due to the gravitational force. The position of an individual represents a solution in the search space. During the search process, the lower mass individuals repeatedly update their positions to find better positions.

In the GSA, the mass of an individual is calculated by using an objective function value based on the position of the individual. An individual having a higher mass compared to the other individuals is an elite individual for the population. To start the search process of the GSA, the initial population of N individuals is randomly generated in the search space. Let the position of the ith individual be represented by the following:

$$X_i = (x_i^1, x_i^2, x_i^3, \ldots, x_i^d), \quad i \in \{1, 2, 3, \ldots, N\} \tag{1}$$

where $x_i^d$ indicates the position of the ith individual in the dth dimension. The mass of the ith individual is described as:

$$m_i(t) = \frac{f_i(t) - worst(t)}{best(t) - worst(t)} \tag{2}$$

$$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)} \tag{3}$$

where $f_i(t)$ represents the fitness value of the ith individual in the tth iteration, and $best(t)$ and $worst(t)$ represent the best and worst fitness values of the current population in the tth iteration, respectively. The gravitational force between individuals $X_i$ and $X_j$ in the dth dimension is calculated by the following:

$$F_{ij}^d(t) = G(t)\,\frac{M_i(t) \times M_j(t)}{R_{ij}(t) + \varepsilon}\,\big(x_j^d(t) - x_i^d(t)\big) \tag{4}$$

where $R_{ij}(t)$ is the Euclidean distance between individuals $X_i$ and $X_j$, and $\varepsilon$ is a small value to prevent a divide-by-zero error. $G(t)$ is a gravitational constant that exponentially decreases and is described as follows:

$$G(t) = G_0 \times e^{-\alpha \frac{t}{T}} \tag{5}$$

where $G_0$ is an initial value, $\alpha$ is a constant, and $t$ and $T$ are the current iteration number and the maximum number of iterations, respectively. For an individual $X_i$, the total gravitational force $F_i^d(t)$ is calculated as follows:

$$F_i^d(t) = \sum_{j \in Kbest,\, j \neq i} rand_j\, F_{ij}^d(t) \tag{6}$$

where Kbest indicates the set of the first K individuals with the best fitness values, K is the number of individuals in Kbest, and $rand_j$ is a uniformly distributed random value in the interval (0,1). Hence, the acceleration $a_i^d(t)$ of individual $X_i$ in the dth dimension at time t is calculated as follows:

$$a_i^d(t) = \frac{F_i^d(t)}{M_i(t)} \tag{7}$$

Therefore, the velocity $v_i^d(t+1)$ and position $x_i^d(t+1)$ of individual $X_i$ are updated in the next iteration, respectively, as follows:

$$v_i^d(t+1) = rand_i \times v_i^d(t) + a_i^d(t) \tag{8}$$

$$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) \tag{9}$$

where $rand_i$ is a uniformly distributed random number in the interval (0,1).

The main steps of the GSA are the following, and the implementation of the original GSA is shown in Algorithm 1:

(1) Initialize the population according to a random uniform distribution;
(2) Manage the boundary constraints of each individual;
(3) Evaluate the fitness of the individuals by the objective function;
(4) Calculate the mass of the individuals by Eqs. (2) and (3);
(5) Update the gravitational constant G(t) by Eq. (5);
(6) Calculate the gravitational force of the individuals by Eqs. (4) and (6);
(7) Calculate the acceleration of the individuals by Eq. (7);
(8) Update the velocity and position of the individuals by Eqs. (8) and (9), respectively;
(9) Repeat Steps 2 to 8 until the termination criterion is satisfied.

Algorithm 1: GSA.

Input: Parameters N, d, G0, α, MNFEs
Output: The optimal solution
1  Initialization: Randomly generate a population {X1, X2, ..., XN};
2  while NFEs < MNFEs do
3      for i = 1 to N do
4          Manage the boundary constraint of individual Xi;
5      end
6      for i = 1 to N do
7          Evaluate the fitness of individual Xi;
8      end
9      for i = 1 to N do
10         Calculate the mass of individual Xi by Eqs. (2) and (3);
11     end
12     Update the gravitational constant G(t) by Eq. (5);
13     for i = 1 to N do
14         Calculate the gravitational force of individual Xi by Eqs. (4) and (6);
15     end
16     for i = 1 to N do
17         Calculate the velocity and position of individual Xi by Eqs. (8) and (9);
18     end
19     NFEs = NFEs + N;
20 end
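To make the update cycle concrete, the following is a minimal NumPy sketch of Eqs. (2)-(9), written for this presentation rather than taken from the authors' MATLAB implementation. The function name gsa_step, the assumption of a minimization problem, and the linear Kbest schedule (K shrinking from N to 1, as discussed in Section 3.1) are our own choices:

```python
import numpy as np

def gsa_step(X, V, fitness, t, T, G0=100.0, alpha=20.0, eps=1e-12):
    """One GSA iteration over positions X (N x d) and velocities V."""
    N, d = X.shape
    # Eqs. (2)-(3): normalized masses for a minimization problem.
    best, worst = fitness.min(), fitness.max()
    m = (fitness - worst) / min(best - worst, -eps)  # denominator kept nonzero
    M = m / (m.sum() + eps)
    # Eq. (5): exponentially decaying gravitational constant.
    G = G0 * np.exp(-alpha * t / T)
    # Kbest: the K best individuals, with K shrinking linearly from N to 1.
    K = max(1, int(round(N - (N - 1) * t / T)))
    kbest = np.argsort(fitness)[:K]
    # Eqs. (4) and (6): randomly weighted sum of pairwise forces.
    F = np.zeros((N, d))
    for i in range(N):
        for j in kbest:
            if j != i:
                R = np.linalg.norm(X[i] - X[j])
                F[i] += np.random.rand() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
    # Eqs. (7)-(9): acceleration, then velocity and position updates.
    a = F / (M[:, None] + eps)
    V = np.random.rand(N, 1) * V + a   # rand_i drawn per individual, Eq. (8)
    X = X + V
    return X, V
```

A full implementation would additionally track the best-so-far solution and handle the boundary constraints, as Algorithm 1 does.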
3. Aggregative learning gravitational search algorithm

3.1. Issues of GSA

Although the original GSA is very efficient at finding the optimal solution for optimization problems, in some cases, its search ability is restricted, and it easily gets trapped in the local optima. One of the reasons for the degraded performance may be the exponential decay of the gravitational force among the individuals. The reduced gravitational force leads to insufficient exploration being performed by the individuals during the entire search process. Meanwhile, the search step size of an individual exponentially descends, which can cause premature convergence. In addition, the GSA has the drawback of low exploration and exploitation abilities. The exploration ability ensures that the individuals search those promising regions of the search space where the probability of the presence of an optimal solution is high. The exploitation ability ensures that the individuals extract the required information that is available in the discovered promising areas of the search space. Consequently, the trade-off between the exploration and exploitation abilities can enhance the performance of the GSA during the search process.

In the GSA, the gravitational constant G plays an important role in balancing the exploration and exploitation abilities. The value of G influences the acceleration of the search process. The individuals with large accelerations explore the entire search space to find the promising regions, while the individuals with small accelerations exploit the promising regions to find the optimal solution. A large G indicates a strong interaction among individuals, while a small G indicates a weak interaction among individuals. Therefore, an individual needs different Gs due to the different search conditions. However, the exponential decay of G causes low exploration and the chance of getting trapped in the local optima. Due to these issues, individuals cannot effectively explore the search space and are unable to escape from the local optima. In addition, an individual is only attracted by the K best individuals to find a solution. However, the number of best individuals (K) linearly decreases, which implies that the gravitational force and exploration ability of the individuals gradually decrease. Hence, the individuals cannot escape from the local optima in the later search process. Therefore, it is necessary to strengthen the exploration ability of the individuals in the early search phase and strengthen their exploitation ability in the later search phase. To resolve these issues, a self-adaptive strategy is proposed to adjust the gravitational constant to balance exploration and exploitation, and an aggregative learning strategy is proposed to change the interaction manner among individuals to improve the gravitational force.

3.2. Self-adaptive gravitational constant

The self-adaptive gravitational constant is introduced to better balance the exploration and exploitation during the search. In the original GSA, the gravitational constant exponentially decreases, which indicates that the exploration ability gradually weakens and the exploitation ability gradually improves. However, the individuals trapped in the local optima need a large exploration ability to escape from such optima. The convergence towards the elite individual should also be improved to speed up the convergence rate. Moreover, the gravitational constant of each individual uniformly changes in the original GSA. To improve the search performance of the GSA, the gravitational constant of each individual should individually change according to its search condition in the search process. Some individuals need a large gravitational constant to explore the search space, and other individuals exploit the local optimal region with a small gravitational constant. Therefore, it is necessary for each individual to have its own gravitational constant to balance the exploration and exploitation. The original gravitational constant exponentially decreases, which may be the reason for individuals getting trapped in the local optimum later in the search process.

When the fitness of an individual is worse or unchanged, it indicates that the individual may be located in a local optimal region. Therefore, such individuals need large gravitational forces to escape from this region. When the fitness of an individual improves, it indicates that the individual has a higher chance of converging towards the optimal solution. Thus, the value of the gravitational constant G should be increased to improve the gravitational forces of the individuals. The illustration of the self-adaptive gravitational constant is shown in Fig. 1. In this figure, the triangle, pentagram, circle, and arrowhead represent the local optimal region, the global optimal region, the individual, and the gravitational force, respectively. Case 1 represents that if an individual is trapped in a local optimum, it needs to increase the force to escape from the local optimum by increasing the gravitational constant. Case 2 implies that an individual quickly moves to the global optimal region with an increasing gravitational constant. Therefore, in order to estimate the situation of individual $X_i$, $ns_i$ and $nf_i$ are used as counters to record the fitness information, and both are initially set to zero.

$$ns_i^t = \begin{cases} ns_i^{t-1} + 1, & \text{if } f_i(t) < f_i(t-1) \\ 0, & \text{otherwise} \end{cases} \tag{10}$$

$$nf_i^t = \begin{cases} nf_i^{t-1} + 1, & \text{if } f_i(t) > f_i(t-1) \\ 0, & \text{otherwise} \end{cases} \tag{11}$$

Fig. 1. The illustration of self-adaptive gravitational constant.

For individual $X_i$ in iteration t, a threshold $\theta$ and a probability p are implemented to control the update of the gravitational constant. If $ns_i^t$ exceeds $\theta$, the gravitational constant is increased to accelerate the individual's convergence towards the elite individual. Similarly, if $nf_i^t$ exceeds $\theta$, the gravitational constant is increased to improve the ability to avoid the local optima during the search. The self-adaptive $G_i(t)$ of the individual is defined in Eq. (12):

$$G_i(t) = \begin{cases} G_i(t) \cdot r_i(t), & \text{if } Counter > \theta \ \&\ rand < p \\ G_i(t), & \text{otherwise} \end{cases} \tag{12}$$

where Counter refers to the two counters $ns_i^t$ and $nf_i^t$, and rand is a uniformly distributed random number in the interval (0,1). When Counter exceeds $\theta$ and rand is less than p, the individual needs a large gravitational constant to improve its exploration ability, and the gravitational constant is multiplied by $r_i(t)$. Otherwise, the gravitational constant is set to the gravitational constant of the original GSA.

The gravitational force between two individuals is related to their masses and the distance between them in Eq. (4). Therefore, the acceleration and gravitational constant of the individual are utilized to adjust its $G_i(t)$. The ratio of the acceleration and gravitational constant of the individual can be seen as the ratio of the mass and distance multiplied by the ratio of the new gravitational field and the individual. Therefore, the ratio represents both the gravitational constant difference between an individual and a new gravitational field and the distribution of individuals. Here, $r_i(t)$ represents the adjustment amplitude of $G_i(t)$ and is defined in Eq. (13):

$$r_i(t) = \begin{cases} \frac{1}{c}, & c < 1 \\ c, & \text{otherwise} \end{cases} \tag{13}$$

where c is the ratio of the acceleration and gravitational constant and is defined in Eq. (14). To ensure that the individual can obtain a large gravitational constant, when c is less than 1, $r_i(t)$ is set to the reciprocal of c; otherwise, $r_i(t)$ is set to c.

$$c = \left|\log\left(\frac{|a_i(t)|}{G_i(t)}\right)\right| \tag{14}$$

where $a_i(t)$ and $G_i(t)$ are the acceleration and the gravitational constant of individual $X_i$ in iteration t, respectively.

The original and self-adaptive gravitational constants are shown in the left and right sub-figures of Fig. 2, respectively. In the left sub-figure, the gravitational constant exponentially decreases. In the right sub-figure, the gravitational constant changes according to the search condition of the individuals based on the original gravitational constant, where the orange and blue lines represent the adjustment of Case 1 and Case 2, respectively. When an individual gets a worse search condition, it means that the individual is a failure. In contrast, when an individual gets a better search condition, it means that the individual is a success.

Fig. 2. Illustration of original and self-adaptive gravitational constant.



The self-adaptive strategy directly makes the gravitational constant increase in a certain iteration and recovers the original exponential change in the next iteration. Therefore, the self-adaptive strategy changes the current gravitational constant of an individual according to the current search state of the individual. This strategy does not affect the previous search tendencies of the individuals.

Remark 1. The individuals exhibit different search conditions during the search process. Hence, each individual is provided its own G, and it adjusts its G according to its search condition. Different from the original GSA, each individual can search the solution space according to its search condition. The opportunity of finding better solutions is improved, and every individual can individually balance exploration and exploitation.

Remark 2. The fitness of the objective function is used to estimate the search condition of the individuals. Two counters $ns_i$ and $nf_i$ are proposed to record the estimated results. If an individual cannot obtain a better solution in several iterations, the individual is trapped in a local optimum with a high probability. If an individual improves over several iterations, it may move towards the global optimum.

Remark 3. A novel reused information strategy is used to adjust the G of every individual. The historical evolution information (i.e., the displacement information of individuals) and the difference information (i.e., the G difference between individuals and gravitational fields) of the individuals are first used to adjust G.
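The counter maintenance of Eqs. (10)-(11) and the amplification rule of Eqs. (12)-(14) can be sketched as follows. This is our own illustrative NumPy code, not the authors' implementation; the defaults θ = 2 and p = 0.5 follow the ALGSA settings reported later in Table 2 (where θ is denoted as limit), and the small ε guards against degenerate logarithms are our additions:

```python
import numpy as np

def update_counters(fit_now, fit_prev, ns, nf):
    """Eqs. (10)-(11): per-individual success/failure counters."""
    ns = np.where(fit_now < fit_prev, ns + 1, 0)
    nf = np.where(fit_now > fit_prev, nf + 1, 0)
    return ns, nf

def self_adaptive_G(G, acc_norm, ns, nf, theta=2, p=0.5, eps=1e-12):
    """Eqs. (12)-(14): amplify G_i when either counter exceeds theta."""
    c = np.abs(np.log(np.abs(acc_norm) / (G + eps) + eps))   # Eq. (14)
    r = np.where(c < 1.0, 1.0 / (c + eps), c)                # Eq. (13)
    trigger = ((ns > theta) | (nf > theta)) & (np.random.rand(G.size) < p)
    return np.where(trigger, G * r, G)                       # Eq. (12)
```

In a triggered iteration the constant is simply multiplied by $r_i(t)$; in the next iteration it is re-derived from the exponential baseline of Eq. (5), which matches the "recover" behavior described above.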
best individuals. However, each individual possesses its own grav-
3.3. Aggregative learning gravitational force

In addition, aggregative learning is proposed to improve the gravitational force of the individuals. In the original GSA, the total gravitational force of the individuals is calculated by Eqs. (4) and (6). The way that the Kbest individuals directly attract other individuals is called Kbest attraction learning. However, K and G gradually decrease, which results in the rapid decay of the gravitational force. As the exploration ability rapidly decreases, the more promising regions are left unexplored in the search process. Meanwhile, in this case, the gravitational force is unable to provide a sufficient force for avoiding falling into the local optima.

In this paper, the Kbest is skillfully used to structure several new gravitational fields rather than to directly attract other individuals, and the total gravitational force of an individual is improved by the aggregative learning method. The new total gravitational force is proposed to improve the search performance and is named aggregative learning. These generated gravitational fields attract individuals to search the solution space. Moreover, the number of gravitational fields changes as Kbest varies, and each gravitational field is differently structured by a different number of best individuals in a certain iteration. For example, if Kbest has the j best individuals in iteration t, then j gravitational fields are built by the j best individuals to attract other individuals, while if Kbest possesses the i best individuals in iteration t+1, then the i best individuals construct i gravitational fields. The first j best individuals structure the jth gravitational field, while the first j-1 best individuals construct the (j-1)th gravitational field. The best individual is repeatedly utilized to create the different gravitational fields, and this enhances the learning from the best individual. Therefore, an individual is able to obtain a larger gravitational force to explore a wide area of the search space. However, the best individual produces a different influence in the different gravitational fields, because the generated gravitational fields contain different numbers of individuals. In addition, as K decreases, the aggregative learning gradually approaches the Kbest attraction learning. When the gravitational field is created by one best individual, aggregative learning becomes Kbest attraction learning. Therefore, aggregative learning is a generalization of Kbest attraction learning.

The total gravitational force $F_i$ of individual $X_i$ is composed of several component forces $Y_i(j)$. The component forces are composed of several subforces. The subforces are calculated from the j best individuals, where j changes from 1 to K. The j best individuals generate small gravitational fields to attract individual $X_i$; hence, the gravitational force of an individual is the sum of the forces of the j best individuals. However, each individual possesses its own gravitational constant. To consider the effect of each individual, the gravitational constant is set as the mean value over the j best individuals in the new small gravitational field. The aggregative learning force is described as follows:

$$F_i^d(t) = \sum_{j=1}^{K} Y_i(j) \tag{15}$$

$$Y_i(j) = \frac{\sum_{k=1}^{j} G_k(t)}{j} \sum_{k=1,\, k \neq i}^{j} r_k\, \frac{M_i(t)\, M_k(t)}{R_{i,k}(t) + \varepsilon}\,\big(x_k^d(t) - x_i^d(t)\big) \tag{16}$$

where K linearly decreases from N to 1, $R_{i,k}$ is the Euclidean distance between individuals $X_i$ and $X_k$, and $\varepsilon$ is a small value that prevents the divide-by-zero error. $G_k$ is the gravitational constant of individual $X_k$, and $r_k$ is a random number in the interval (0,1).

The gravitational force of aggregative learning and that of Kbest attraction learning are illustrated in Figs. 3 and 4, respectively.

Fig. 3. The gravitational force of aggregative learning.



Fig. 4. The gravitational force of Kbest attraction learning.

The left sub-figure exhibits the gravitational force of a Kbest individual, and the right sub-figure shows the gravitational force of a non-Kbest individual. In Fig. 3, the Kbest includes the three best individuals (i.e., $X_1$, $X_2$, and $X_3$), which means that three gravitational fields are constructed. The circles represent individuals; the bigger the area, the better the fitness. From Fig. 3, it can be observed that three gravitational fields are built to attract other individuals when Kbest includes the three individuals $X_1$, $X_2$, and $X_3$ in the aggregative learning. First, the red gravitational field is built by individuals $X_1$, $X_2$, and $X_3$. Second, the green gravitational field is created by individuals $X_1$ and $X_2$. Third, the blue gravitational field is constructed by individual $X_1$. These different gravitational fields attract individuals to search the solution space. In the left sub-figure of Fig. 3, $F_1$ and $F_2$ have the same direction but different values. The reason is that $X_3$, as a best individual, constructs the red field but not the green field, which causes the gravitational constants of the two gravitational fields to be different. On the other hand, in Fig. 4, all individuals build one gravitational field, and the Kbest individuals directly attract other individuals to search for an optimal solution in Kbest attraction learning. In aggregative learning, the individuals are attracted by three gravitational fields, while in Kbest attraction learning, an individual is directly attracted by individuals $X_1$, $X_2$, and $X_3$. Therefore, an individual can obtain a large gravitational force to explore the solution space. Especially, the gravitational force of a Kbest individual contains three sub-gravitational forces in aggregative learning, while it contains only two sub-gravitational forces in Kbest attraction learning. Hence, the Kbest individuals can obtain larger gravitational forces to prevent premature convergence.
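A direct loop-based reading of Eqs. (15)-(16) can be sketched as below. It is our illustrative NumPy translation (names such as aggregative_force and idx_sorted are ours), not the authors' code:

```python
import numpy as np

def aggregative_force(X, M, G, idx_sorted, i, K, eps=1e-12):
    """Eqs. (15)-(16): force on individual i from K nested gravitational fields."""
    F = np.zeros(X.shape[1])
    for j in range(1, K + 1):          # Eq. (15): sum over the K fields
        field = idx_sorted[:j]         # field j is built by the j best individuals
        G_mean = G[field].mean()       # mean constant of the field's members
        Y = np.zeros(X.shape[1])
        for k in field:                # Eq. (16): subforces inside field j
            if k != i:
                R = np.linalg.norm(X[i] - X[k])
                Y += np.random.rand() * M[i] * M[k] * (X[k] - X[i]) / (R + eps)
        F += G_mean * Y
    return F
```

Because the j best individuals belong to every field whose index is at least j, the top-ranked individuals contribute to many component forces, which is exactly the aggregation effect illustrated in Fig. 3.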
Remark 4. The aggregative learning innovatively constructs the gravitational fields to improve the gravitational force of the individuals in order to prevent premature convergence. When the gravitational field includes one best individual, aggregative learning becomes Kbest attraction learning. Therefore, aggregative learning is a generalization of Kbest attraction learning.

The primary procedures of the ALGSA are described as follows: 1) Initialize the population according to a random uniform distribution; 2) Manage the boundary constraints of each individual; 3) Evaluate the fitness of each individual by the objective function; 4) Record the evolution situation of each individual by Eqs. (10) and (11); 5) Calculate the mass of each individual by Eqs. (2) and (3); 6) Calculate the gravitational constant of each individual with the help of Eqs. (5) and (12); 7) Select the K best individuals in the population and sort them; 8) Select the j best individuals among the K best individuals; 9) The j best individuals build the small gravitational field to attract individuals, where j varies from 1 to K; 10) Calculate the gravitational force of each individual by Eqs. (15) and (16). The implementation of the ALGSA is shown in Algorithm 2.

Algorithm 2: ALGSA.

Input: Parameters N, d, limit (θ), p, G0, α, MNFEs
Output: The optimal solution
1  Initialization: Randomly generate a population {X1, X2, ..., XN};
2  while NFEs < MNFEs do
3      for i = 1 to N do
4          Manage the boundary constraint of individual Xi;
5      end
6      for i = 1 to N do
7          Evaluate the fitness of individual Xi;
8      end
9      for i = 1 to N do
10         if $f_i(t) < f_i(t-1)$ then
11             $ns_i^t = ns_i^{t-1} + 1$; $nf_i^t = 0$;
12         else
13             $nf_i^t = nf_i^{t-1} + 1$; $ns_i^t = 0$;
14         end
15     end
16     for i = 1 to N do
17         Calculate the mass of individual Xi by Eqs. (2) and (3);
18     end
19     for i = 1 to N do
20         Calculate the gravitational constant of individual Xi by Eqs. (5) and (12)-(14);
21     end
22     for i = 1 to N do
23         for j = 1 to K do
24             Calculate the total gravitational force of the individual according to Eqs. (15) and (16);
25         end
26     end
27     for i = 1 to N do
28         Calculate the velocity and position of individual Xi by Eqs. (8) and (9);
29     end
30     NFEs = NFEs + N;
31 end
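Putting the pieces together, a compact main loop in the spirit of Algorithm 2 might look like the following sketch. It reuses the helper functions sketched earlier in this section, reduces boundary handling to simple clipping, and should be read as an illustration under those assumptions rather than a faithful reproduction of the authors' MATLAB code:

```python
import numpy as np

def algsa(f, lb, ub, N=100, d=30, T=1000, G0=100.0, alpha=20.0, eps=1e-12):
    X = lb + (ub - lb) * np.random.rand(N, d)
    V = np.zeros((N, d))
    acc = np.zeros((N, d))
    ns = np.zeros(N, dtype=int)
    nf = np.zeros(N, dtype=int)
    fit = np.apply_along_axis(f, 1, X)
    for t in range(1, T + 1):
        X = np.clip(X, lb, ub)                           # simple boundary handling
        new_fit = np.apply_along_axis(f, 1, X)
        ns, nf = update_counters(new_fit, fit, ns, nf)   # Eqs. (10)-(11)
        fit = new_fit
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / min(best - worst, -eps)      # Eq. (2)
        M = m / (m.sum() + eps)                          # Eq. (3)
        G = np.full(N, G0 * np.exp(-alpha * t / T))      # Eq. (5) baseline
        G = self_adaptive_G(G, np.linalg.norm(acc, axis=1), ns, nf)  # Eq. (12)
        K = max(1, int(round(N - (N - 1) * t / T)))      # Kbest size
        order = np.argsort(fit)
        F = np.array([aggregative_force(X, M, G, order, i, K) for i in range(N)])
        acc = F / (M[:, None] + eps)                     # Eq. (7)
        V = np.random.rand(N, 1) * V + acc               # Eq. (8)
        X = X + V                                        # Eq. (9)
    return X[np.argmin(fit)]  # a full implementation would track the best-so-far
```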

3.4. Advantage of ALGSA

Some works improving the GSA, such as PSOGSA, MGSA, GGSA, DNLGSA, CGSA, IGSA, and HGSA, have been carried out. However, these previous studies still suffer from low search performance. The ALGSA uses different methods to improve the search performance. The advantages of the ALGSA are shown by the following comparisons.

Compared with the DNLGSA, the ALGSA directly utilizes evolution information rather than population diversity information to adjust G. The evolution information better reflects the search process than population diversity information. In addition, the individuals learn more search knowledge from the Kbest individuals by using aggregative learning to prevent premature convergence.

Compared with the IGSA, the ALGSA uses counters to record the search conditions and adjusts G according to the search condition, and the self-adaptive constants of the ALGSA are more easily implemented. Compared with the HGSA, the ALGSA uses the Kbest individuals to construct gravitational fields to attract individuals trapped in local optima, which can provide a large gravitational force to improve the opportunity of escaping from the local optima for individuals.

Compared with the CGSA, MGSA and PSOGSA, the ALGSA innovatively provides different Gs for different individuals. In the search process, different individuals have different search conditions. Some individuals need strong exploration to explore the solution space; others need strong exploitation to find a better solution in promising regions. Therefore, each individual adjusts its G to balance exploration and exploitation. This is a flexible and effective way to improve the performance of the ALGSA.

4. Experimental results and analysis

In this section, to verify the performance of the proposed algorithm, the comparison process is carried out on the IEEE CEC2017 (Awad, Ali, Liang, Qu, & Suganthan, 2016) benchmark functions. First, the CEC2017 benchmark functions are described. Second, the experimental setup of each algorithm is defined. Third, the comparison between the ALGSA and other GSA variants is carried out. Fourth, the comparison between the ALGSA and other meta-heuristic algorithms is performed. Fifth, real-world problems are used to test the performance of the ALGSA. Finally, neural network training is implemented to further verify the performance of the ALGSA.

4.1. Benchmark functions

The IEEE CEC2017 (Awad et al., 2016) benchmark test suite is used to examine the performance of the proposed ALGSA. This benchmark set consists of 30 problems with varying difficulty levels. It should be noted that the F2 function has been excluded because it has unstable behavior, especially for higher dimensions. Thus, among the twenty-nine benchmark functions, there are two unimodal functions (F1 and F3), seven simple multimodal functions (F4–F10), ten hybrid functions (F11–F20) and ten composition functions (F21–F30). These benchmark functions include many characteristics of real-world problems, and therefore, in the literature, several algorithms are tested on these benchmark problems.

4.2. Experimental setup

For all experiments, the common setting of the parameters for all the algorithms is fixed as follows: the number of individuals, i.e., the population size N, is set to 100; the maximum number of function evaluations (MNFEs) is set to $10^4 \times D$, where D is the dimension of the benchmark functions and is set to 30; and the search range is [-100, 100]. For each function, each algorithm individually runs 30 times to obtain the statistical results. All algorithms are implemented using the MATLAB software on a PC with a 3.00 GHz Intel(R) Core(TM) i5-7400 CPU and 8 GB of RAM.

4.3. Performance evaluation criteria

To measure the performance of the ALGSA, three different performance evaluation criteria are adopted, which are defined as follows:

(1) Non-parametric statistical test: The Wilcoxon rank-sum test is implemented to detect whether the difference between every pair of algorithms is significant at a significance level of α = 0.05. The sign "+" denotes that the control algorithm is significantly better than its competitor, the sign "≈" indicates that the control algorithm is similar to its competitor, and the sign "−" implies that the control algorithm is significantly worse than its competitor. Therefore, "w/t/l" represents the numbers of functions on which the control algorithm is significantly better than, similar to, or significantly worse than its competitor (a usage sketch is given after this list).

(2) Convergence curve graph: The convergence curves are utilized to compare the convergence rates of the algorithms, and they illustrate the history of the current best solution in each iteration. The X-axis shows the number of function evaluations; the Y-axis represents the average best fitness so far.

(3) Box-and-whisker diagrams: The box-and-whisker diagrams illustrate the quality of the solutions. The distance between the minimum and maximum values indicates the distribution of the solutions. A shorter distance implies that the standard deviation of the solutions is small and the search performance is stable. Moreover, the lowest altitude indicates the solution quality of the algorithm; a lower altitude indicates that the algorithm can find a better solution. The upper black line, upper blue line, red line, lower blue line, lower black line, and red "+" represent the maximum, first quartile, median, third quartile, minimum, and extreme values, respectively.
small search region. The IGSA utilizes a self-adaptive mechanism
For all experiments, the common setting of the parameters for to adjust the gravitational constant in order to balance exploita-
all the algorithms is fixed as follow: The number of individuals, tion and exploration. The HGSA uses the hierarchical interaction
i.e., the population size N, is set to 100; the maximum number of among three-layer to alleviate the problem of premature conver-
function evaluations (MNFEs) is set to 104 ∗ D, where D is the di- gence. The values of the parameters used in these algorithms are
mension of the benchmark functions and set as 30; and the search listed in Table 2.
range is [-100,100]. For each function, each algorithm individually The twenty-nine benchmark functions with 30 dimensions are
runs 30 times to obtain the statistical results. All algorithms are utilized to test the performance. The experimental results obtained

Table 2
Parameter settings of the ALGSA and other variants of the GSA.

Algorithm: Parameters
ALGSA: $G_i(0) = 100$, $\alpha = 20$, $limit = 2$, $p = 0.5$
GSA: $G_0 = 100$, $\alpha = 20$
HGSA: $G_0 = 100$, $L = 100$, $w_1(t) = 1 - t^6/T^6$, $w_2(t) = t^6/T^6$
MGSA: $G_0 = 100$, $\alpha = 20$, $w_1(t) = 0.5$, $w_2(t) = 0.5$
GGSA: $G_0 = 100$, $\alpha = 20$, $w_1(t) = 2 - 2t^3/T^3$, $w_2(t) = 2t^3/T^3$
CGSA: $G_0 = 100$, $\alpha = 20$, $LP = 50$
IGSA: $\alpha_{mean}(0) = 20$, $\sigma = 0.3$, $p = 0.1$, $k = 6$
PSOGSA: $G_0 = 100$, $\alpha = 20$, $w_1(t) = 0.5$, $w_2(t) = 1.5$
DNLGSA: $G_0 = 100$, $\alpha = 20$, $w_1(t) = 0.5 - 0.5t^{1/6}/T^{1/6}$, $w_2(t) = 1.5t^{1/6}/T^{1/6}$, $k = 10$, $gm = 5$

The twenty-nine benchmark functions with 30 dimensions are utilized to test the performance. The experimental results obtained by the ALGSA and the other GSA variants are shown in Table 3. From this table, it can be observed that the ALGSA has the best mean and standard deviation on 19 benchmark functions (i.e., F1, F4–F13, F17, F20 and F22–F30) among the nine algorithms. Moreover, the PSOGSA, MGSA, GGSA, DNLGSA, CGSA, IGSA, and HGSA have better means and standard deviations on 1, 0, 1, 0, 0, 2, and 6 benchmark functions, respectively, compared to the ALGSA. The numbers of problems on which the ALGSA significantly outperforms the PSOGSA, MGSA, GGSA, DNLGSA, CGSA, IGSA, and HGSA are 26, 25, 24, 27, 29, 24, and 20, respectively. A comparison of the ALGSA with the original GSA shows that the performance of the ALGSA is significantly better on all 29 benchmark functions. The ALGSA outperforms all its competitors on the F1 benchmark function and outperforms most competitors on the F3 benchmark function, which indicates that the ALGSA improves the search performance of the individuals in terms of the exploitation strength, since unimodal problems can be used to evaluate the exploitation ability of algorithms. The ALGSA outperforms all its competitors on the F4–F10 benchmark functions, which implies that the ALGSA enhances the exploration skills of the individuals, because these problems are multimodal and contain large numbers of local optima, which therefore makes them suitable for evaluating the exploration ability of algorithms. For the ten hybrid functions, the search performance is improved on five functions by the ALGSA, and for the ten composition functions, the search performance of the ALGSA is significantly better than that of its competitors on eight functions. These problems are designed by combining the features of unimodal and multimodal functions. Therefore, the enhanced performance on these problems shows that the proposed strategies of the ALGSA are successful at establishing a comparatively better balance between exploration and exploitation.

In addition, the box-and-whisker diagrams and convergence graphs for the benchmark functions are respectively provided in Figs. 5 and 6 to illustrate the differences between the ALGSA and the other GSA variants used for comparison in this paper. From Fig. 5, it can be noticed that the ALGSA possesses the lowest altitudes and shortest distances on F5, F8, F16, F23, F24 and F27, which confirms its better performance and stronger robustness. Fig. 6 depicts the whole convergence process. The horizontal axis represents the number of function evaluations, and the vertical axis depicts the average objective function values obtained over 30 independent trials of the algorithms. From Fig. 6, we can observe that, except for the HGSA, the other variants of the GSA quickly converge towards the local optimal region in the early search process. The HGSA displays a strong exploration ability, but it still fails to provide better solutions than the ALGSA.

Fig. 5. The box-and-whisker diagrams of optimal solutions obtained by nine kinds of GSAs on F5, F8, F16, F23, F24, F27.

Table 3
Experimental results of ALGSA and GSA variants on CEC2017 benchmark functions.

Algorithm F1 F3 F4 F5 F6

ALGSA 3.54E+02 ± 2.24E+02 4.58E+04 ± 9.94E+03 4.27E+02 ± 7.07E−01 5.18E+02 ± 4.11E+00 6.00E+02 ± 5.30E−07
GSA 2.00E+03 ± 1.03E+03 + 8.30E+04 ± 4.33E+03 + 5.42E+02 ± 1.59E+01 + 7.26E+02 ± 2.01E+01 + 6.50E+02 ± 2.75E+00 +
HGSA 2.68E+03 ± 2.50E+03 + 4.36E+04 ± 5.49E+03 ≈ 5.19E+02 ± 2.63E+00 + 6.53E+02 ± 1.28E+01 + 6.08E+02 ± 4.54E+00 +
MGSA 4.63E+03 ± 4.30E+03 + 4.23E+04 ± 1.28E+04 ≈ 5.33E+02 ± 5.86E+01 + 6.35E+02 ± 3.01E+01 + 6.27E+02 ± 8.09E+00 +
GGSA 2.18E+03 ± 1.12E+03 + 6.02E+04 ± 6.73E+03 + 5.33E+02 ± 2.30E+01 + 6.11E+02 ± 1.22E+01 + 6.09E+02 ± 5.29E+00 +
CGSA 1.82E+03 ± 7.79E+02 + 8.41E+04 ± 6.80E+03 + 5.36E+02 ± 1.95E+01 + 7.18E+02 ± 1.76E+01 + 6.51E+02 ± 4.18E+00 +
IGSA 1.88E+03 ± 1.37E+03 + 6.04E+04 ± 7.02E+03 + 5.22E+02 ± 2.10E+01 + 5.42E+02 ± 8.18E+00 + 6.00E+02 ± 1.29E−02 ≈
PSOGSA 4.12E+03 ± 3.26E+03 + 3.56E+03 ± 7.87E+03 − 1.04E+03 ± 5.05E+02 + 6.46E+02 ± 3.40E+01 + 6.24E+02 ± 8.94E+00 +
DNLGSA 1.22E+05 ± 1.81E+05 + 1.49E+04 ± 1.24E+04 − 7.11E+02 ± 1.46E+02 + 6.50E+02 ± 3.67E+01 + 6.41E+02 ± 7.86E+00 +

F7 F8 F9 F10 F11

ALGSA 7.42E+02 ± 2.67E+00 8.16E+02 ± 3.17E+00 9.00E+02 ± 1.57E−11 1.87E+03 ± 3.17E+02 1.14E+03 ± 1.69E+01
GSA 7.87E+02 ± 1.19E+01 + 9.51E+02 ± 1.31E+01 + 2.93E+03 ± 3.92E+02 + 4.87E+03 ± 4.34E+02 + 1.45E+03 ± 8.92E+01 +
HGSA 7.41E+02 ± 3.01E+00 − 9.00E+02 ± 9.03E+00 + 9.00E+02 ± 9.67E−14 + 4.21E+03 ± 2.93E+02 + 1.20E+03 ± 2.98E+01 +
MGSA 8.38E+02 ± 2.65E+01 + 9.08E+02 ± 2.29E+01 + 3.41E+03 ± 8.50E+02 + 4.92E+03 ± 8.11E+02 + 1.23E+03 ± 4.53E+01 +
GGSA 7.37E+02 ± 1.49E+00 − 8.88E+02 ± 9.79E+00 + 9.00E+02 ± 0.00E+00 + 4.38E+03 ± 3.89E+02 + 1.25E+03 ± 3.23E+01 +
CGSA 7.84E+02 ± 9.91E+00 + 9.52E+02 ± 7.97E+00 + 2.87E+03 ± 3.34E+02 + 4.94E+03 ± 4.11E+02 + 1.47E+03 ± 1.06E+02 +
IGSA 7.43E+02 ± 5.60E+00 ≈ 8.33E+02 ± 7.72E+00 + 9.00E+02 ± 2.11E−14 − 3.58E+03 ± 4.60E+02 + 1.28E+03 ± 7.44E+01 +
PSOGSA 9.72E+02 ± 6.32E+01 + 9.36E+02 ± 3.25E+01 + 4.54E+03 ± 1.67E+03 + 4.70E+03 ± 6.23E+02 + 1.49E+03 ± 3.14E+02 +
DNLGSA 9.86E+02 ± 6.78E+01 + 9.16E+02 ± 2.85E+01 + 3.93E+03 ± 1.10E+03 + 4.96E+03 ± 8.84E+02 + 1.51E+03 ± 2.44E+02 +

F12 F13 F14 F15 F16

ALGSA 1.27E+04 ± 4.97E+03 1.61E+03 ± 4.28E+02 1.51E+03 ± 4.18E+01 3.10E+03 ± 1.79E+03 2.33E+03 ± 2.37E+02
GSA 1.03E+07 ± 1.93E+07 + 3.10E+04 ± 6.45E+03 + 4.74E+05 ± 1.31E+05 + 1.17E+04 ± 1.93E+03 + 3.18E+03 ± 2.84E+02 +
HGSA 1.29E+05 ± 8.15E+04 + 1.46E+04 ± 5.32E+03 + 6.72E+03 ± 3.05E+03 − 2.20E+03 ± 7.21E+02 ≈ 2.83E+03 ± 2.32E+02 +
MGSA 5.27E+05 ± 5.78E+05 + 2.81E+05 ± 1.42E+06 + 1.87E+04 ± 3.80E+04 ≈ 6.08E+03 ± 4.72E+03 + 2.83E+03 ± 2.89E+02 +
GGSA 4.83E+05 ± 2.11E+05 + 1.87E+04 ± 4.70E+03 + 1.96E+05 ± 7.59E+04 + 4.12E+03 ± 1.57E+03 + 2.88E+03 ± 3.22E+02 +
CGSA 1.46E+07 ± 2.66E+07 + 2.83E+04 ± 5.26E+03 + 4.84E+05 ± 1.19E+05 + 1.15E+04 ± 1.93E+03 + 3.20E+03 ± 2.90E+02 +
IGSA 1.40E+06 ± 7.32E+05 + 3.06E+04 ± 7.97E+03 + 1.96E+05 ± 1.37E+05 + 1.31E+04 ± 3.65E+03 + 2.71E+03 ± 2.16E+02 +
PSOGSA 6.00E+07 ± 1.48E+08 + 2.39E+07 ± 7.46E+07 + 9.87E+04 ± 2.69E+05 − 5.31E+05 ± 2.82E+06 + 3.05E+03 ± 4.59E+02 +
DNLGSA 1.58E+08 ± 2.63E+08 + 1.62E+06 ± 8.72E+06 + 6.07E+04 ± 1.02E+05 + 1.29E+04 ± 1.02E+04 + 2.74E+03 ± 3.13E+02 +

F17 F18 F19 F20 F21

ALGSA 1.99E+03 ± 1.71E+02 7.21E+04 ± 2.52E+04 8.26E+03 ± 5.42E+03 2.19E+03 ± 4.94E+01 2.32E+03 ± 6.99E+00
GSA 2.90E+03 ± 1.70E+02 + 3.20E+05 ± 1.76E+05 + 1.42E+04 ± 5.13E+03 + 3.03E+03 ± 2.36E+02 + 2.56E+03 ± 1.95E+01 +
HGSA 2.77E+03 ± 1.99E+02 + 6.16E+04 ± 1.47E+04 − 5.42E+03 ± 1.25E+03 ≈ 2.86E+03 ± 2.24E+02 + 2.41E+03 ± 5.90E+01 +
MGSA 2.37E+03 ± 2.07E+02 + 1.44E+05 ± 1.28E+05 + 9.28E+03 ± 6.23E+03 + 2.67E+03 ± 1.86E+02 + 2.44E+03 ± 3.14E+01 +
GGSA 2.67E+03 ± 2.06E+02 + 1.68E+05 ± 7.28E+04 + 5.93E+03 ± 1.46E+03 ≈ 2.82E+03 ± 1.64E+02 + 2.41E+03 ± 2.11E+01 +
CGSA 2.83E+03 ± 1.92E+02 + 2.78E+05 ± 1.01E+05 + 1.34E+04 ± 4.79E+03 + 3.01E+03 ± 1.88E+02 + 2.57E+03 ± 2.71E+01 +
IGSA 2.22E+03 ± 2.14E+02 + 3.81E+05 ± 3.85E+05 + 1.57E+04 ± 8.10E+03 + 2.41E+03 ± 1.70E+02 + 2.35E+03 ± 6.54E+00 +
PSOGSA 2.27E+03 ± 2.29E+02 + 3.07E+05 ± 1.01E+06 ≈ 1.43E+04 ± 1.33E+04 + 2.57E+03 ± 2.35E+02 + 2.43E+03 ± 3.53E+01 +
DNLGSA 2.30E+03 ± 2.31E+02 + 1.88E+05 ± 1.86E+05 + 1.72E+04 ± 5.34E+04 ≈ 2.72E+03 ± 2.15E+02 + 2.43E+03 ± 3.73E+01 +

F22 F23 F24 F25 F26

ALGSA 2.30E+03 ± 6.30E−06 2.65E+03 ± 1.70E+01 2.76E+03 ± 3.96E+01 2.89E+03 ± 6.19E−02 2.87E+03 ± 4.66E+01
GSA 6.39E+03 ± 1.69E+03 + 3.56E+03 ± 1.23E+02 + 3.29E+03 ± 5.57E+01 + 2.93E+03 ± 1.22E+01 + 6.86E+03 ± 8.95E+02 +
HGSA 2.30E+03 ± 3.91E−09 + 2.76E+03 ± 1.33E+02 + 2.92E+03 ± 3.58E+01 + 2.89E+03 ± 7.59E+00 + 2.85E+03 ± 5.07E+01 +
MGSA 4.19E+03 ± 2.22E+03 ≈ 3.00E+03 ± 8.12E+01 + 3.27E+03 ± 1.12E+02 + 2.92E+03 ± 1.66E+01 + 5.56E+03 ± 1.63E+03 +
GGSA 2.30E+03 ± 2.05E−10 − 2.86E+03 ± 3.94E+01 + 2.91E+03 ± 3.70E+01 + 2.93E+03 ± 1.03E+01 + 2.94E+03 ± 5.28E+02 -
CGSA 5.89E+03 ± 2.08E+03 + 3.62E+03 ± 1.06E+02 + 3.29E+03 ± 5.28E+01 + 2.94E+03 ± 8.49E+00 + 6.73E+03 ± 6.61E+02 +
IGSA 2.30E+03 ± 0.00E+00 − 2.74E+03 ± 2.26E+01 + 2.82E+03 ± 2.24E+01 + 2.92E+03 ± 9.79E+00 + 2.83E+03 ± 4.66E+01 -
PSOGSA 4.68E+03 ± 1.91E+03 + 2.93E+03 ± 8.75E+01 + 3.21E+03 ± 1.43E+02 + 3.02E+03 ± 7.53E+01 + 5.70E+03 ± 1.30E+03 +
DNLGSA 4.50E+03 ± 2.32E+03 + 3.00E+03 ± 8.74E+01 + 3.18E+03 ± 7.29E+01 + 3.00E+03 ± 4.61E+01 + 5.98E+03 ± 1.26E+03 +

F27 F28 F29 F30 w/t/l

ALGSA 3.23E+03 ± 1.49E+01 3.17E+03 ± 5.32E+01 3.40E+03 ± 6.98E+01 7.38E+03 ± 6.60E+02


GSA 4.67E+03 ± 3.21E+02 + 3.31E+03 ± 4.94E+01 + 4.71E+03 ± 2.10E+02 + 1.70E+05 ± 1.24E+05 + 29/ 0/ 0
HGSA 3.25E+03 ± 2.08E+01 + 3.11E+03 ± 2.82E+01 ≈ 4.05E+03 ± 1.88E+02 + 1.10E+04 ± 2.60E+03 + 20/ 6/ 3
MGSA 3.52E+03 ± 1.19E+02 + 3.21E+03 ± 7.43E+01 + 4.12E+03 ± 3.03E+02 + 7.95E+04 ± 1.81E+05 + 25/ 4/ 0
GGSA 3.39E+03 ± 3.57E+01 + 3.23E+03 ± 3.28E+01 + 4.25E+03 ± 2.30E+02 + 4.39E+04 ± 1.91E+04 + 24/ 2/ 3
CGSA 4.55E+03 ± 2.73E+02 + 3.32E+03 ± 4.92E+01 + 4.71E+03 ± 1.91E+02 + 1.67E+05 ± 9.29E+04 + 29/ 0/ 0
IGSA 3.37E+03 ± 6.87E+01 + 3.26E+03 ± 3.48E+01 + 4.03E+03 ± 2.22E+02 + 3.34E+05 ± 3.68E+05 + 24/ 1/ 4
PSOGSA 3.52E+03 ± 1.36E+02 + 3.52E+03 ± 2.00E+02 + 4.24E+03 ± 3.80E+02 + 3.39E+06 ± 1.42E+07 + 26/ 2/ 1
DNLGSA 3.43E+03 ± 1.50E+02 + 3.44E+03 ± 9.79E+01 + 4.47E+03 ± 3.17E+02 + 3.60E+06 ± 6.27E+06 + 27/ 1/ 1

However, the ALGSA shows a different convergence trajectory, since it explores the desired region in the early stage and exploits the desired region in the late stage. Therefore, the ALGSA tries to establish a better transition from the exploration phase to the exploitation phase to avoid the drawbacks of the original GSA and to enhance the search efficiency of the individuals. The above results verify that the ALGSA has better search performance due to its self-adaptive gravitational constant and aggregative learning strategy.

Fig. 6. The convergence graphs of average best-so-far solutions obtained by nine kinds of GSAs on F5, F8, F16, F23, F24, F27.

4.5. Comparison between ALGSA and other heuristic algorithms

To further verify the search performance of the ALGSA, an external comparison is adopted between the ALGSA and other meta-heuristic algorithms such as the DE (Yu et al., 2019), SCA (Mirjalili, 2016), ABC (Gu et al., 2017), WOA (Mirjalili & Lewis, 2016), GWO (Mirjalili, Mirjalili, & Lewis, 2014), and NCS (Tang, Yang, & Yao, 2016).

Table 4 the ALGSA is significantly better than other meta-heuristic algo-


Parameter settings of the heuristic algorithms.
rithms on the F4–F10 benchmark functions, which implies that the
Algorithms Parameters ALGSA is able to obtain better search performance on the simple
DE F = 0.9, CR = 0.9 multimodal functions. For the hybrid and composition functions,
SCA α=2 ALGSA outperforms other meta-heuristic algorithms on most of the
WOA α linearly decreases from 2 to 0 test functions. Therefore, the ALGSA exhibits strong and compe-
GWO α linearly decreases from 2 to 0 tition search performance on benchmark functions compared to
NCS r = 0.99
other meta-heuristic algorithms.
Moreover, the box-and-whisker diagrams and convergence
curve graphs are shown in Figs. 7 and 8 to illustrate the dif-
NCS (Tang, Yang, & Yao, 2016). The DE is a global optimization ference between the ALGSA and other meta-heuristic algorithms.
algorithm with different mutation strategies and crossover strate- From Fig. 7, it can be observer that the ALGSA possesses lowest
gies. It possesses fewer control parameters, strong robustness and altitude and shortest distant on the F5, F8, F10, F23, F27, and F29
easy implants. The SCA is a population-based optimization algo- compared to heuristic algorithms, which represents the ALGSA has
rithm, it uses a mathematical model based on sine and cosine stronger stability and better search performance. From Fig. 8, the
functions to update the position of individuals for exploration and ALGSA get superior optimal solutions on the F5, F8, F10 and F23,
exploitation of the search space. The ABC is an intelligent algo- compared to the DE, SCA, ABC, WOA, GWO, and NCS. Moreover, the
rithm inspired behavior of honey bee swarm. The WOA and GWO convergence trajectory has proved that the ALGSA possesses com-
are also population-based meta-heuristic algorithms. The WOA is paratively better the ability to escape the local region. The above
inspired by the social behavior of humpback whales. The GWO results verify the competitive performance of the ALGSA as com-
mimics the leadership hierarchy and hunting mechanism of grey pared to the other meta-heuristic algorithms.
wolves in nature to search for global optima. NCS models indi-
4.6. Real-world optimization problems
viduals, search process as the probability distributions and im-
proves the differences among the probability distributions to pro-
To evaluate the performance of ALGSA, all variants of GSA are
mote negatively correlated search behaviors. Their parameters set-
tested on 22 IEEE CEC2011 real-world problems (Das & Sugan-
tings are presented in Table 4.
than, 2010).These problems are:
The twenty-nine benchmark functions listed in IEEE CEC2017
with 30 dimensions are used to test the performance. The experi- • F1 : A parameter estimation for Frequency-Modulated Sound
mental results between the ALGSA and other meta-heuristic algo- Waves;
rithms are shown in Table 5. From this table, it can be observe that • F2 : A Lennard-Jones potential problem;
the ALGSA has the better means and standard deviations on the • F3 : A bifunctional catalyst blend control problem;
22 benchmark functions (i.e., F1, F4–F13, F17, F20 and F22–F30). • F4 : A stirred tank reactor control problem;
The number of problems on which the ALGSA has significantly out- • F5 and F6 : Two Tersoff potential minimization problems;
performed the DE, SCA, ABC, WOA, GWO, and NCS are 24, 28, 29, • F7 : A radar polyphase code design problem;
29, 26, and 27, respectively. For the unimodal functions, the ALGSA • F8 : A transmission network expansion problem;
outperforms most meta-heuristic algorithms, which indicates the • F9 : A transmission pricing problem;
ALGSA possesses stronger performance than other meta-heuristic • F10 : An antenna array design problem;
algorithms on the unimodal functions. The search performance of • F11.1 and F11.2 : Two dynamic economic dispatch problems;
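Throughout the result tables, "+", "−", and "≈" indicate that the ALGSA performs significantly better than, significantly worse than, or statistically equivalently to a competitor according to the Wilcoxon test. As a minimal sketch of how such marks can be produced (assuming minimization and a 5% significance level, which are assumptions rather than details stated here):

import numpy as np
from scipy.stats import ranksums

def significance_mark(algsa_runs, rival_runs, alpha=0.05):
    # algsa_runs / rival_runs: best errors over independent runs
    # (minimization, so smaller is better).
    _, p = ranksums(algsa_runs, rival_runs)
    if p >= alpha:
        return '~'   # rendered as '≈' in the tables
    return '+' if np.mean(algsa_runs) < np.mean(rival_runs) else '-'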

Table 5
Experimental results of ALGSA, DE, SCA, ABC, WOA, GWO, and NCS on CEC2017 benchmark functions.

Algorithm F1 F3 F4 F5 F6

ALGSA 1.60E+03 ± 1.77E+03 4.58E+04 ± 1.64E+04 4.27E+02 ± 2.89E+01 5.18E+02 ± 4.01E+00 6.00E+02 ± 3.57E−07
DE 1.36E+09 ± 5.40E+08 + 7.94E+04 ± 1.13E+04 + 6.30E+02 ± 2.94E+01 + 7.53E+02 ± 1.05E+01 + 6.23E+02 ± 5.00E+00 +
SCA 2.68E+03 ± 1.77E+09 + 3.50E+04 ± 6.31E+03 − 1.40E+03 ± 2.74E+02 + 7.71E+02 ± 2.17E+01 + 6.49E+02 ± 5.34E+00 +
ABC 4.63E+03 ± 7.74E+04 + 1.03E+05 ± 1.09E+04 + 5.19E+02 ± 2.79E+00 + 7.18E+02 ± 9.44E+00 + 6.00E+02 ± 6.69E−03 +
WOA 2.18E+03 ± 1.73E+06 + 1.60E+05 ± 8.12E+04 + 5.45E+02 ± 3.70E+01 + 7.64E+02 ± 6.16E+01 + 6.67E+02 ± 1.12E+01 +
GWO 1.82E+03 ± 8.91E+08 + 2.85E+04 ± 9.70E+03 − 5.70E+02 ± 4.98E+01 + 5.92E+02 ± 2.63E+01 + 6.04E+02 ± 2.33E+00 +
NCS 1.88E+03 ± 1.22E+07 + 6.39E+04 ± 1.14E+04 + 5.15E+02 ± 1.81E+01 + 8.37E+02 ± 2.98E+01 + 6.78E+02 ± 6.01E+00 +

F7 F8 F9 F10 F11

ALGSA 7.42E+02 ± 3.04E+00 8.16E+02 ± 3.34E+00 9.00E+02 ± 6.33E−14 1.87E+03 ± 2.54E+02 1.14E+03 ± 3.11E+01
DE 1.08E+03 ± 3.56E+01 + 1.06E+03 ± 1.40E+01 + 3.75E+03 ± 9.09E+02 + 8.15E+03 ± 2.45E+02 + 1.33E+03 ± 2.26E+01 +
SCA 1.12E+03 ± 2.88E+01 + 1.05E+03 ± 1.63E+01 + 5.52E+03 ± 1.10E+03 + 8.12E+03 ± 3.34E+02 + 2.19E+03 ± 3.99E+02 +
ABC 9.43E+02 ± 9.71E+00 + 1.02E+03 ± 1.16E+01 + 1.90E+03 ± 4.45E+02 + 8.10E+03 ± 3.19E+02 + 4.37E+03 ± 7.31E+02 +
WOA 1.21E+03 ± 9.33E+01 + 9.99E+02 ± 3.20E+01 + 6.54E+03 ± 2.35E+03 + 6.06E+03 ± 9.74E+02 + 1.45E+03 ± 1.15E+02 +
GWO 8.35E+02 ± 4.95E+01 + 8.81E+02 ± 1.25E+01 + 1.18E+03 ± 1.39E+02 + 3.73E+03 ± 5.49E+02 + 1.51E+03 ± 4.42E+02 +
NCS 1.30E+03 ± 7.44E+01 + 1.08E+03 ± 2.23E+01 + 1.85E+04 ± 2.41E+03 + 4.89E+03 ± 2.32E+02 + 1.36E+03 ± 4.70E+01 +

F12 F13 F14 F15 F16

ALGSA 1.29E+04 ± 1.14E+04 1.61E+03 ± 1.90E+03 1.51E+03 ± 1.89E+04 3.10E+03 ± 1.38E+03 2.33E+03 ± 2.23E+02
DE 5.10E+07 ± 1.37E+07 + 4.23E+03 ± 4.65E+02 + 1.49E+03 ± 5.89E+00 ≈ 1.72E+03 ± 3.22E+01 − 3.17E+03 ± 2.65E+02 +
SCA 1.21E+09 ± 2.30E+08 + 4.07E+08 ± 1.98E+08 + 1.19E+05 ± 7.05E+04 + 1.56E+07 ± 1.29E+07 + 3.64E+03 ± 2.13E+02 +
ABC 1.17E+08 ± 2.66E+07 + 8.02E+07 ± 3.32E+07 + 3.04E+05 ± 1.25E+05 + 2.08E+07 ± 8.65E+06 + 3.76E+03 ± 1.87E+02 +
WOA 4.47E+07 ± 3.11E+07 + 1.35E+05 ± 1.44E+05 + 7.27E+05 ± 6.79E+05 + 6.97E+04 ± 4.48E+04 + 3.47E+03 ± 5.17E+02 +
GWO 3.31E+07 ± 3.81E+07 + 6.63E+06 ± 2.33E+07 + 8.10E+04 ± 1.76E+05 + 2.44E+05 ± 5.82E+05 + 2.32E+03 ± 2.39E+02 ≈
NCS 1.78E+07 ± 4.20E+06 + 2.40E+06 ± 5.67E+05 + 1.04E+04 ± 4.75E+03 + 2.36E+05 ± 8.69E+04 + 2.77E+03 ± 1.67E+02 +

F17 F18 F19 F20 F21

ALGSA 1.99E+03 ± 1.49E+02 7.21E+04 ± 4.68E+04 8.26E+03 ± 3.79E+03 2.19E+03 ± 4.06E+01 2.32E+03 ± 5.40E+00
DE 2.17E+03 ± 2.10E+02 + 6.72E+03 ± 2.01E+03 − 1.96E+03 ± 7.52E+00 − 2.30E+03 ± 2.08E+02 ≈ 2.54E+03 ± 1.37E+01 +
SCA 2.42E+03 ± 1.64E+02 + 2.80E+06 ± 1.21E+06 + 2.48E+07 ± 1.13E+07 + 2.61E+03 ± 1.29E+02 + 2.56E+03 ± 1.92E+01 +
ABC 2.49E+03 ± 1.19E+02 + 6.34E+06 ± 3.20E+06 + 2.39E+07 ± 1.03E+07 + 2.74E+03 ± 8.15E+01 + 2.52E+03 ± 1.18E+01 +
WOA 2.53E+03 ± 2.16E+02 + 3.02E+06 ± 2.58E+06 + 2.45E+06 ± 2.03E+06 + 2.78E+03 ± 1.76E+02 + 2.56E+03 ± 6.24E+01 +
GWO 1.93E+03 ± 1.16E+02 ≈ 7.75E+05 ± 1.40E+06 + 2.06E+05 ± 3.88E+05 + 2.33E+03 ± 1.66E+02 + 2.37E+03 ± 1.85E+01 +
NCS 2.07E+03 ± 8.93E+01 + 1.54E+05 ± 4.83E+04 + 9.83E+05 ± 3.62E+05 + 2.59E+03 ± 1.04E+02 + 2.29E+03 ± 1.38E+02 −

F22 F23 F24 F25 F26

ALGSA 2.30E+03 ± 1.91E−09 2.65E+03 ± 1.82E+01 2.76E+03 ± 3.17E+01 2.89E+03 ± 7.06E−02 2.87E+03 ± 4.79E+01
DE 2.51E+03 ± 4.77E+01 + 2.88E+03 ± 1.38E+01 + 3.04E+03 ± 1.01E+01 + 3.02E+03 ± 3.21E+01 + 6.20E+03 ± 8.99E+01 +
SCA 8.25E+03 ± 2.37E+03 + 2.99E+03 ± 2.34E+01 + 3.16E+03 ± 2.96E+01 + 3.20E+03 ± 4.91E+01 + 6.87E+03 ± 2.56E+02 +
ABC 2.64E+03 ± 2.08E+02 + 2.89E+03 ± 1.60E+01 + 3.04E+03 ± 1.17E+01 + 2.89E+03 ± 1.73E−01 + 5.74E+03 ± 1.28E+02 +
WOA 6.65E+03 ± 1.87E+03 + 3.05E+03 ± 8.54E+01 + 3.16E+03 ± 9.15E+01 + 2.94E+03 ± 2.73E+01 + 7.24E+03 ± 1.00E+03 +
GWO 4.47E+03 ± 1.45E+03 + 2.73E+03 ± 3.04E+01 + 2.90E+03 ± 4.70E+01 + 2.96E+03 ± 2.69E+01 + 4.43E+03 ± 2.45E+02 +
NCS 2.53E+03 ± 7.39E+02 + 2.95E+03 ± 1.25E+02 + 2.92E+03 ± 2.94E+02 ≈ 2.92E+03 ± 1.54E+01 + 3.01E+03 ± 1.03E+02 +

F27 F28 F29 F30 w/t/l

ALGSA 3.23E+03 ± 1.21E+01 3.17E+03 ± 5.56E+01 3.40E+03 ± 7.82E+01 7.38E+03 ± 5.68E+02


DE 3.26E+03 ± 1.25E+01 + 3.38E+03 ± 3.38E+01 + 4.31E+03 ± 1.45E+02 + 1.97E+05 ± 6.56E+04 + 24/ 2/ 3
SCA 3.39E+03 ± 4.83E+01 + 3.78E+03 ± 1.36E+02 + 4.62E+03 ± 2.62E+02 + 7.44E+07 ± 2.59E+07 + 28/ 0/ 1
ABC 3.46E+03 ± 3.87E+01 + 3.26E+03 ± 2.59E+01 + 4.93E+03 ± 1.31E+02 + 2.67E+07 ± 1.04E+07 + 29/ 0/ 0
WOA 3.36E+03 ± 8.88E+01 + 3.31E+03 ± 3.96E+01 + 5.00E+03 ± 4.58E+02 + 1.04E+07 ± 6.42E+06 + 29/ 0/ 0
GWO 3.23E+03 ± 1.78E+01 + 3.33E+03 ± 4.83E+01 + 3.71E+03 ± 1.26E+02 + 3.90E+06 ± 3.10E+06 + 26/ 2/ 1
NCS 3.29E+03 ± 2.03E+01 + 3.27E+03 ± 2.00E+01 + 4.18E+03 ± 8.86E+01 + 1.80E+06 ± 4.41E+05 + 27/ 1/ 1

4.6. Real-world optimization problems

To evaluate the performance of the ALGSA, all variants of the GSA are tested on 22 IEEE CEC2011 real-world problems (Das & Suganthan, 2010). These problems are:

• F1: A parameter estimation problem for frequency-modulated sound waves;
• F2: A Lennard-Jones potential problem;
• F3: A bifunctional catalyst blend control problem;
• F4: A stirred tank reactor control problem;
• F5 and F6: Two Tersoff potential minimization problems;
• F7: A radar polyphase code design problem;
• F8: A transmission network expansion problem;
• F9: A transmission pricing problem;
• F10: An antenna array design problem;
• F11.1 and F11.2: Two dynamic economic dispatch problems;
• F11.3 to F11.7: Five static economic dispatch problems;
• F11.8 to F11.10: Three hydrothermal scheduling problems;
• F12 and F13: Two spacecraft trajectory optimization problems.

The experimental results are shown in Table 6, including the mean, the standard deviation, and the outcome of the Wilcoxon sign test. From this table, it is easily observed that the ALGSA has better performance than the other variants of the GSA. To further verify this performance, the Friedman test is implemented and the results are shown in Table 7. The ALGSA obtains the best ranking among the variants of the GSA. Therefore, the ALGSA has better potential to solve real-world problems.

4.7. Artificial neural network training

The artificial neural network (ANN) has achieved great success in real-world problems. However, the training of an ANN is a difficult problem because the traditional back-propagation (BP) learning algorithm suffers from falling into local minima and from the proliferation of saddle points. Therefore, non-BP learning algorithms are adopted to train the ANN. Thus, in this section, to further verify the performance of the ALGSA, the ALGSA is utilized to train the dendritic neuron model (DNM).

Due to the nonlinearity of its synapses, the DNM was proposed and has been adapted for classification and prediction problems (Chen et al., 2017; Jiang et al., 2017; Tang et al., 2018; Zhou et al., 2016). The DNM is composed of four layers: a synaptic layer, a dendrite layer, a membrane layer, and a soma layer (Gao et al., 2018). The synaptic layer utilizes a sigmoid function to process the received input information. The dendrite layer applies a multiplication function to the signals of the synaptic layer. The membrane layer collects the signals of the dendrite layer. The soma layer uses a sigmoid function to calculate the output of the DNM. The training process of the DNM is a complex and difficult problem; the goal of training is to minimize the sum of errors by optimizing the weights w and thresholds θ. Thus, BP, the GSA, and the ALGSA are used to train the DNM for classification, approximation and prediction problems to verify the effectiveness of the ALGSA.

Two classification and two function approximation problems are used to test the effectiveness of the ALGSA for training the DNM. The classification problems, i.e., XOR and balloon, are selected from the University of California at Irvine Machine Learning Repository. The numbers of attributes, training samples, test samples, and classes are listed in Table 9. The function approximation problems are 1-D cosine samples with one peak and 1-D sine samples with four peaks. Table 10 lists their functional expressions, the number and value range of the training samples, and the number and value range of the test samples. A reasonable combination of the DNM parameters is shown in Table 11.

The experimental results of the DNM trained by BP, the GSA, and the ALGSA are listed in Table 8, where the best results are highlighted. The significant differences between BP, the GSA, and the ALGSA are analyzed by the Wilcoxon rank-sum test. From this table, it can be observed that the ALGSA and the GSA obtain better results than BP, which indicates that evolutionary algorithms have better optimization performance than the conventional learning algorithm when training the DNM. In addition, the ALGSA achieves better experimental results than the GSA, which implies that the ALGSA is superior to the GSA in training the DNM.
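As a concrete illustration of the four-layer structure just described, the following minimal sketch implements a DNM forward pass and the sum-of-errors training objective over the flattened weights w and thresholds θ. The soma slope ks and the exact synaptic parameterization w·x − θ are assumptions in the spirit of Gao et al. (2018), not this paper's exact equations; M, k, and θs correspond to the parameters listed in Table 11.

import numpy as np

def dnm_forward(x, w, theta, k=5.0, ks=5.0, theta_s=0.5):
    # Synaptic layer: sigmoid of each weighted input on each of the
    # M dendrite branches; x has shape (D,), w and theta shape (M, D).
    y = 1.0 / (1.0 + np.exp(-k * (w * x - theta)))
    # Dendrite layer: multiply the synaptic signals along each branch.
    z = np.prod(y, axis=1)
    # Membrane layer: collect (sum) the dendritic signals.
    v = np.sum(z)
    # Soma layer: a sigmoid of the membrane signal gives the output.
    return 1.0 / (1.0 + np.exp(-ks * (v - theta_s)))

def dnm_sse(params, X, targets, M):
    # Sum of squared errors; 'params' flattens w and theta so that a
    # population-based optimizer such as the ALGSA can search it directly.
    D = X.shape[1]
    w = params[:M * D].reshape(M, D)
    theta = params[M * D:].reshape(M, D)
    out = np.array([dnm_forward(x, w, theta) for x in X])
    return 0.5 * np.sum((out - targets) ** 2)

A candidate solution for the optimizer is then simply a vector of length 2MD, and dnm_sse serves as its fitness function.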

Table 6
The experimental results of ALGSA and variants of GSA on real-world problems.

Algorithm F1 F2 F3 F4 F5

ALGSA 2.51E+01 ± 9.84E−01 −2.55E+01 ± 2.27E+00 1.15E−05 ± 2.17E−14 1.78E+01 ± 2.42E+00 −3.27E+01 ± 2.35E+00
DNLGSA 2.21E+01 ± 3.48E+00 + −1.98E+01 ± 4.32E+00 − 1.15E−05 ± 3.51E−19 + 1.67E+01 ± 3.31E+00 + −3.13E+01 ± 2.69E+00 +
CGSA 2.55E+01 ± 1.11E+00 + −1.79E+01 ± 3.49E+00 + 1.15E−05 ± 3.94E−19 + 1.87E+01 ± 2.81E+00 + −3.08E+01 ± 3.36E+00 +
GSA 2.58E+01 ± 1.76E+00 + −1.75E+01 ± 3.58E+00 + 1.15E−05 ± 4.17E−19 + 1.91E+01 ± 2.54E+00 + −2.98E+01 ± 3.16E+00 +
GGSA 2.21E+01 ± 3.61E+00 + −2.54E+01 ± 1.76E+00 + 1.15E−05 ± 4.81E−19 + 1.64E+01 ± 2.87E+00 + −3.20E+01 ± 2.65E+00 +
HGSA 1.49E+01 ± 4.92E+00 + −2.37E+01 ± 2.64E+00 ≈ 1.15E−05 ± 3.44E−12 + 1.60E+01 ± 1.93E+00 + −3.22E+01 ± 2.47E+00 +
MGSA 2.13E+01 ± 3.59E+00 + −2.65E+01 ± 1.46E+00 ≈ 1.15E−05 ± 4.49E−19 + 1.59E+01 ± 2.72E+00 + −2.60E+01 ± 2.94E+00 +
PSOGSA 1.58E+01 ± 4.50E+00 + −2.53E+01 ± 2.36E+00 − 1.15E−05 ± 3.59E−19 + 1.49E+01 ± 2.06E+00 + −3.38E+01 ± 1.48E+00 +
IGSA 2.68E+01 ± 2.97E+00 + −2.15E+01 ± 2.06E+00 + 1.15E−05 ± 5.17E−21 + 1.97E+01 ± 2.52E+00 + −3.04E+01 ± 2.68E+00 -

F6 F7 F8 F9 F10

ALGSA −2.10E+01 ± 1.38E+00 6.94E−01 ± 1.43E−01 2.37E+02 ± 1.52E+01 5.97E+03 ± 7.65E+03 −1.47E+01 ± 8.61E−01
DNLGSA −2.01E+01 ± 3.61E+00 + 9.77E−01 ± 2.48E−01 + 2.95E+02 ± 1.14E+02 + 6.37E+05 ± 9.78E+04 + −1.37E+01 ± 2.45E+00 +
CGSA −1.80E+01 ± 2.47E+00 + 9.62E−01 ± 1.96E−01 + 2.87E+02 ± 6.59E+01 + 1.41E+03 ± 2.35E+02 + −1.12E+01 ± 3.73E−01 +
GSA −1.83E+01 ± 2.72E+00 + 9.74E−01 ± 1.84E−01 + 2.86E+02 ± 6.48E+01 + 1.36E+03 ± 1.76E+02 + −1.14E+01 ± 4.47E−01 +
GGSA −2.09E+01 ± 2.03E+00 − 9.35E−01 ± 1.77E−01 + 2.49E+02 ± 2.77E+01 ≈ 1.68E+05 ± 3.48E+04 + −1.20E+01 ± 4.10E−01 +
HGSA −2.18E+01 ± 2.28E+00 − 6.93E−01 ± 1.40E−01 + 2.21E+02 ± 2.79E+00 − 1.79E+05 ± 4.17E+04 + −1.25E+01 ± 5.67E−01 +
MGSA −2.09E+01 ± 3.79E+00 + 1.66E+00 ± 2.46E−01 + 2.24E+02 ± 1.05E+01 + 6.02E+05 ± 7.46E+04 + −1.19E+01 ± 1.82E+00 +
PSOGSA −2.57E+01 ± 3.20E+00 + 1.18E+00 ± 2.41E−01 + 2.24E+02 ± 1.63E+01 + 2.26E+06 ± 1.53E+05 + −1.52E+01 ± 3.21E+00 +
IGSA −2.08E+01 ± 2.54E+00 ≈ 1.23E+00 ± 1.56E−01 + 9.79E+02 ± 4.90E+02 − 8.15E+05 ± 1.14E+05 + −1.29E+01 ± 8.46E−01 +

F11.1 F11.2 F11.3 F11.4 F11.5

ALGSA 5.11E+04 ± 4.87E+02 1.84E+07 ± 8.53E+04 1.56E+04 ± 1.84E+02 1.92E+04 ± 1.69E+02 3.33E+04 ± 2.26E+01
DNLGSA 1.61E+06 ± 5.18E+05 + 4.21E+07 ± 1.62E+06 + 1.64E+04 ± 3.12E+03 + 1.94E+04 ± 1.67E+02 + 8.32E+04 ± 5.14E+04 +
CGSA 5.21E+04 ± 3.73E+02 + 3.24E+07 ± 8.31E+05 + 7.66E+04 ± 5.76E+04 + 1.92E+04 ± 1.32E+02 + 3.23E+05 ± 4.20E+04 +
GSA 5.21E+04 ± 4.43E+02 + 3.23E+07 ± 1.12E+06 + 1.12E+05 ± 5.71E+04 + 1.92E+04 ± 1.02E+02 + 3.43E+05 ± 2.76E+04 +
GGSA 5.22E+04 ± 3.66E+02 + 2.52E+07 ± 6.38E+05 + 7.34E+04 ± 3.38E+04 + 1.92E+04 ± 1.13E+02 + 2.63E+05 ± 4.04E+04 +
HGSA 5.13E+04 ± 5.31E+02 + 2.05E+07 ± 1.93E+05 + 4.48E+04 ± 3.54E+04 + 1.92E+04 ± 1.28E+02 ≈ 3.33E+04 ± 2.21E+01 +
MGSA 5.50E+04 ± 9.07E+03 + 3.38E+07 ± 1.30E+06 + 1.61E+04 ± 2.81E+03 + 1.93E+04 ± 1.62E+02 + 3.47E+04 ± 5.71E+03 +
PSOGSA 1.81E+06 ± 6.01E+05 + 4.40E+07 ± 1.89E+06 + 1.55E+04 ± 2.54E+01 + 1.94E+04 ± 1.66E+02 + 6.29E+04 ± 4.47E+04 +
IGSA 5.11E+04 ± 4.87E+02 + 2.72E+07 ± 1.16E+06 + 4.45E+04 ± 3.63E+04 + 1.92E+04 ± 1.53E+02 + 1.47E+05 ± 5.92E+04 +

F11.6 F11.7 F11.8 F11.9 F11.10

ALGSA 1.41E+05 ± 1.89E+03 2.00E+06 ± 9.02E+04 9.44E+05 ± 2.25E+03 9.43E+05 ± 1.66E+03 9.44E+05 ± 2.31E+03
DNLGSA 1.45E+05 ± 5.32E+03 + 1.01E+08 ± 4.31E+08 + 2.08E+07 ± 6.59E+06 ≈ 2.13E+07 ± 5.88E+06 + 2.00E+07 ± 5.42E+06 +
CGSA 1.45E+05 ± 2.28E+03 + 2.60E+06 ± 6.30E+05 + 9.42E+05 ± 1.46E+03 + 9.89E+05 ± 5.40E+04 + 9.42E+05 ± 1.47E+03 +
GSA 1.47E+05 ± 2.20E+03 + 2.72E+06 ± 7.56E+05 + 9.42E+05 ± 1.50E+03 + 9.81E+05 ± 3.71E+04 + 9.42E+05 ± 1.28E+03 +
GGSA 1.45E+05 ± 1.23E+03 + 2.18E+06 ± 3.32E+05 + 9.42E+05 ± 1.33E+03 ≈ 1.07E+06 ± 7.59E+04 + 9.42E+05 ± 1.35E+03 +
HGSA 1.43E+05 ± 1.96E+03 + 1.94E+06 ± 5.57E+03 ≈ 9.43E+05 ± 1.75E+03 ≈ 1.15E+06 ± 7.85E+04 + 9.43E+05 ± 2.32E+03 +
MGSA 1.45E+05 ± 4.00E+03 + 1.97E+06 ± 9.47E+04 + 1.33E+06 ± 8.50E+05 ≈ 1.52E+06 ± 2.84E+05 + 1.26E+06 ± 5.28E+05 +
PSOGSA 1.44E+05 ± 5.60E+03 + 9.58E+08 ± 1.27E+09 ≈ 6.20E+07 ± 1.22E+07 ≈ 6.75E+07 ± 1.38E+07 + 6.77E+07 ± 1.37E+07 +
IGSA 1.42E+05 ± 1.54E+03 + 1.94E+06 ± 5.56E+03 + 3.67E+06 ± 2.97E+06 + 6.26E+06 ± 4.60E+06 + 4.49E+06 ± 3.71E+06 +

F12 F13 w/t/l

ALGSA 1.73E+01 ± 3.43E+00 2.91E+01 ± 4.83E+00


DNLGSA 2.37E+01 ± 5.46E+00 + 2.80E+01 ± 4.17E+00 + 17/0/5
CGSA 3.93E+01 ± 8.01E+00 + 5.09E+01 ± 8.61E+00 + 16/3/3
GSA 4.14E+01 ± 6.64E+00 + 5.40E+01 ± 8.96E+00 + 17/2/3
GGSA 3.08E+01 ± 5.45E+00 − 4.06E+01 ± 9.73E+00 + 12/5/5
HGSA 2.73E+01 ± 4.96E+00 ≈ 3.66E+01 ± 5.93E+00 + 11/7/4
MGSA 2.43E+01 ± 6.50E+00 ≈ 3.05E+01 ± 4.11E+00 + 12/3/7
PSOGSA 2.31E+01 ± 7.01E+00 + 2.90E+01 ± 5.95E+00 + 10/6/6
IGSA 2.68E+01 ± 5.50E+00 − 3.30E+01 ± 4.13E+00 + 16/4/2

Table 7
The Friedman test results.

Algorithm ALGSA DNLGSA CGSA GSA GGSA HGSA MGSA PSOGSA IGSA

Ranking 1 8 7 9 3 2 5 4 6
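The rankings in Table 7 are produced by the Friedman test over all of the real-world problems. As a rough sketch of how such average ranks are computed (assuming lower mean error is better and omitting the post-hoc analysis):

import numpy as np
from scipy.stats import rankdata

def friedman_average_ranks(results):
    # results: array of shape (problems, algorithms) holding mean errors;
    # rank each problem's row (rank 1 = best), then average per algorithm.
    ranks = np.apply_along_axis(rankdata, 1, results)
    return ranks.mean(axis=0)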

Table 8
The experimental results of the DNM trained by BP, the GSA, and the ALGSA.

BP GSA ALGSA
Mean(std) Mean(std) Mean(std)

XOR 2.57E−01(1.29E−02) 2.46E−01(5.98E−02) 1.31E−01 +(8.04E−02)


Balloon 1.12E−01(3.84E−02) 2.07E−04(5.26E−04) 7.05E−05 +(2.69E−04)
Sine 3.75E−01(2.09E−10) 3.06E−01(3.64E−02) 2.71E−01 +(2.85E−02)
Cosine 2.70E−01(1.56E−09) 2.62E−01(4.39E−02) 2.34E−01 + (7.49E−02)

Table 9
Details of the classification data sets.

Classification data sets # of attributes # of training samples # of test samples # of classes

3-bits XOR 3 8 8 2
Balloon 4 16 16 2

Table 10
Details of function approximation problems.

Function approximation datasets # of training samples # of test samples

Sine: y = sin(2x)          126: x ∈ [−2π : 0.1 : 2π]      252: x ∈ [−2π : 0.05 : 2π]
Cosine: y = cos(xπ/2)^7    31: x ∈ [1.25 : 0.05 : 2.75]   38: x ∈ [1.25 : 0.04 : 2.75]
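For concreteness, the two sample sets in Table 10 can be generated as follows; the a : step : b ranges above are MATLAB-style and include the left endpoint (a sketch, not code from the paper):

import numpy as np

# Sine: y = sin(2x), 126 training and 252 test samples.
x_tr = np.arange(-2 * np.pi, 2 * np.pi, 0.1)     # 126 points
x_te = np.arange(-2 * np.pi, 2 * np.pi, 0.05)    # 252 points
y_tr, y_te = np.sin(2 * x_tr), np.sin(2 * x_te)

# Cosine: y = cos(x*pi/2)^7, 31 training and 38 test samples.
u_tr = np.arange(1.25, 2.75 + 1e-9, 0.05)        # 31 points (includes 2.75)
u_te = np.arange(1.25, 2.75, 0.04)               # 38 points
v_tr = np.cos(u_tr * np.pi / 2) ** 7
v_te = np.cos(u_te * np.pi / 2) ** 7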

Table 11
The reasonable parameters of the DNM for the tested problems.

Problems   M    k    θs
XOR        6    25   0.3
Balloon    7    25   0.9
Sine       22   5    0.7
Cosine     25   15   0.5

5. Discussions

The above experiments have verified that the ALGSA has better performance than the other algorithms on benchmark functions, neural network training, and real-world problems. In this section, the ALGSA is further analyzed with respect to its preset parameters, the self-adaptive gravitational constant, aggregative learning, and computational complexity.

5.1. Analysis of preset parameters

The ALGSA has two important parameters, i.e., θ and p, that affect the search performance. These parameters determine the adjustment frequency and the adjustment opportunity of the gravitational constant, respectively. A small θ and p can make an individual leap frequently, which means that the individual cannot converge. However, a large θ and p cause the individual to miss the globally optimal region and to converge to a locally optimal region, resulting in premature convergence. To find the best settings for the parameters θ and p, experiments are conducted on the IEEE CEC2017 benchmark set with the values of these parameters varying in {1, 2, 3} and {0.2, 0.5, 0.8}, respectively. The experimental results are shown in Table 12. From this table, it can be verified that θ = 2 and p = 0.5 are the better parameter settings.

5.2. The self-adaptive gravitational constant analysis

In the proposed self-adaptive gravitational constant, the adjustment amplitude of the gravitational constant is calculated using the current acceleration and the gravitational constant of an individual. In the ALGSA, the ratio of the acceleration to the gravitational constant reflects the population structure and implies the gravitational constant difference between the individual and the new gravitational field. In the total gravitational force calculation, the new gravitational constant is the average of the gravitational constants of several individuals. Moreover, the distances among individuals and the masses of individuals are utilized to calculate the gravitational force, and they determine the population structure. Therefore, this ratio is used to adjust the gravitational constant. Furthermore, the ratio-based adjustment amplitude is compared with a random adjustment amplitude. The experimental results are shown in Table 13. From this table, it can be observed that the ratio adjustment amplitude is significantly better than the random adjustment on 15 benchmark functions.

5.3. Analysis of aggregative learning

To further verify the effectiveness of the ALGSA, it has been compared with three different variants of the GSA, namely the self-adaptive gravitational constant GSA (GSA-SAG), the aggregative learning GSA (GSA-K), and the enhanced elite learning GSA with the self-adaptive gravitational constant (ELGSA). The GSA-SAG only introduces the self-adaptive gravitational constant into the conventional GSA. The GSA-K only adds aggregative learning to the conventional GSA. Finally, the ELGSA enhances elite learning by directly multiplying the elite individual by a coefficient. The experimental results are shown in Table 14. From this table, it can be observed that the ALGSA is significantly better than the GSA-SAG and the GSA-K, which indicates that aggregative learning combined with the self-adaptive gravitational constant is significantly better than using either modification alone. Therefore, the combination of aggregative learning and the self-adaptive gravitational constant is an effective approach to solving the IEEE CEC2017 benchmarks. The comparison of the ALGSA and the ELGSA indicates that aggregative learning is significantly better than elite learning. Although both aggregative learning and elite learning enhance learning from the best individuals, elite learning directly multiplies the elite individual by a coefficient, whereas aggregative learning repeatedly utilizes the elite individuals and their gravitational constants to create a new gravitational field. Therefore, aggregative learning is significantly better than elite learning.

5.4. Computational complexity

The above experiments have proven that the ALGSA is effective on the benchmark functions and neural network training. The time complexity of the ALGSA is analyzed as follows:

(1) The time complexity of the initialization process is O(N);
(2) Evaluating the fitness of each individual needs O(N);
(3) The time complexity of boundary control is O(N);
(4) Calculating the mass of each individual needs O(N);
(5) Producing the gravitational constant of each individual has O(N) complexity;
(6) The gravitational force of aggregative learning is calculated by the K best individuals, and this operation requires O(N³) cost; and
(7) Updating the velocities and positions of all the individuals needs O(N).

Hence, the total time complexity of the ALGSA over T iterations is:

O(N) + T[O(N) + O(N) + 3O(N) + 2O(N) + O(N³) + 2O(N)] = (7T + 1)O(N) + TO(N³)    (17)
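To make the dominant cost in step (6) concrete, the following schematic sketch shows one way to organize a Kbest-based force accumulation with per-individual gravitational constants. The distance-scaled force form and the pairwise averaging of the gravitational constants are assumptions in the spirit of the standard GSA and of the averaging described in Section 5.2, not the paper's exact update equations (which are given earlier in the article):

import numpy as np

def aggregative_force(X, mass, G, kbest_idx, eps=1e-12):
    # X: positions (N, D); mass: normalized masses (N,);
    # G: per-individual gravitational constants (N,);
    # kbest_idx: indices of the current elite (Kbest) individuals.
    N, D = X.shape
    F = np.zeros((N, D))
    for i in range(N):
        for j in kbest_idx:
            if j == i:
                continue
            G_ij = 0.5 * (G[i] + G[j])          # field from averaged constants
            R = np.linalg.norm(X[i] - X[j])     # Euclidean distance
            F[i] += np.random.rand() * G_ij * mass[i] * mass[j] \
                    * (X[j] - X[i]) / (R + eps)
    return F

The nested loop over the population and the elite set is what dominates the per-iteration cost noted in step (6).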

Table 12
Experimental results of different parameters θ and p.

F1 F3 F4 F5 F6

θ = 2, p = 0.5 3.54E+02 ± 2.24E+02 4.58E+04 ± 9.94E+03 4.27E+02 ± 7.07E−01 5.18E+02 ± 4.11E+00 6.00E+02 ± 5.30E−07
θ = 1, p = 0.2 2.11E+03 ± 2.26E+03 + 5.55E+04 ± 1.31E+04 + 5.08E+02 ± 2.72E+01 + 5.18E+02 ± 3.84E+00 ≈ 6.00E+02 ± 1.64E−07 -
θ = 1, p = 0.5 2.06E+03 ± 2.18E+03 + 6.22E+04 ± 1.52E+04 + 5.08E+02 ± 2.55E+01 + 5.20E+02 ± 5.79E+00 + 6.00E+02 ± 6.46E−07 -
θ = 1, p = 0.8 2.66E+03 ± 2.59E+03 + 5.76E+04 ± 1.82E+04 + 5.04E+02 ± 2.82E+01 + 5.19E+02 ± 5.11E+00 ≈ 6.00E+02 ± 8.94E−07 -
θ = 2, p = 0.2 2.54E+03 ± 2.85E+03 + 4.64E+04 ± 1.46E+04 ≈ 5.14E+02 ± 1.04E+01 + 5.18E+02 ± 5.09E+00 ≈ 6.00E+02 ± 1.96E−07 -
θ = 2, p = 0.8 2.48E+03 ± 2.41E+03 + 4.56E+04 ± 1.12E+04 ≈ 5.17E+02 ± 4.82E+00 + 5.17E+02 ± 3.88E+00 ≈ 6.00E+02 ± 3.35E−07 -
θ = 3, p = 0.2 1.78E+03 ± 1.53E+03 + 5.02E+04 ± 1.46E+04 ≈ 5.08E+02 ± 2.10E+01 + 5.17E+02 ± 3.63E+00 ≈ 6.00E+02 ± 1.31E−07 -
θ = 3, p = 0.5 2.50E+03 ± 2.33E+03 + 5.06E+04 ± 1.70E+04 ≈ 5.14E+02 ± 1.44E+01 + 5.17E+02 ± 3.23E+00 ≈ 6.00E+02 ± 1.48E−07 -
θ = 3, p = 0.8 2.38E+03 ± 2.72E+03 + 4.97E+04 ± 1.26E+04 ≈ 5.09E+02 ± 2.90E+01 + 5.17E+02 ± 3.62E+00 ≈ 6.00E+02 ± 1.48E−07 -

F7 F8 F9 F10 F11

θ = 2, p = 0.5 7.42E+02 ± 2.67E+00 8.16E+02 ± 3.17E+00 9.00E+02 ± 1.57E−11 1.87E+03 ± 3.17E+02 1.14E+03 ± 1.69E+01
θ = 1, p = 0.2 7.44E+02 ± 3.78E+00 + 8.16E+02 ± 3.37E+00 ≈ 9.00E+02 ± 7.61E−14 − 2.32E+03 ± 2.63E+02 + 1.18E+03 ± 3.53E+01 +
θ = 1, p = 0.5 7.45E+02 ± 3.92E+00 + 8.16E+02 ± 4.20E+00 ≈ 9.00E+02 ± 5.97E−14 − 2.29E+03 ± 3.88E+02 + 1.18E+03 ± 3.21E+01 +
θ = 1, p = 0.8 7.45E+02 ± 3.40E+00 + 8.17E+02 ± 4.52E+00 ≈ 9.00E+02 ± 4.72E−14 − 2.31E+03 ± 3.25E+02 + 1.17E+03 ± 2.59E+01 +
θ = 2, p = 0.2 7.43E+02 ± 3.37E+00 ≈ 8.15E+02 ± 2.97E+00 ≈ 9.00E+02 ± 6.33E−14 − 2.48E+03 ± 3.56E+02 + 1.15E+03 ± 2.82E+01 ≈
θ = 2, p = 0.8 7.42E+02 ± 2.84E+00 ≈ 8.17E+02 ± 3.84E+00 ≈ 9.00E+02 ± 2.99E−14 − 2.40E+03 ± 3.60E+02 + 1.17E+03 ± 3.12E+01 +
θ = 3, p = 0.2 7.42E+02 ± 3.11E+00 ≈ 8.16E+02 ± 4.04E+00 ≈ 9.00E+02 ± 6.68E−14 − 2.36E+03 ± 3.19E+02 + 1.16E+03 ± 3.09E+01 ≈
θ = 3, p = 0.5 7.42E+02 ± 3.21E+00 ≈ 8.16E+02 ± 2.88E+00 ≈ 9.00E+02 ± 8.18E−14 − 2.37E+03 ± 3.26E+02 + 1.14E+03 ± 2.69E+01 ≈
θ = 3, p = 0.8 7.43E+02 ± 3.79E+00 ≈ 8.15E+02 ± 3.34E+00 ≈ 9.00E+02 ± 7.00E−14 − 2.39E+03 ± 3.80E+02 + 1.16E+03 ± 3.16E+01 ≈

F12 F13 F14 F15 F16

θ = 2, p = 0.5 1.29E+04 ± 4.97E+03 1.61E+03 ± 4.28E+02 1.51E+03 ± 4.18E+01 3.10E+03 ± 1.79E+03 2.33E+03 ± 2.37E+02
θ = 1, p = 0.2 3.31E+04 ± 1.75E+04 + 4.98E+03 ± 1.54E+03 + 1.51E+04 ± 1.57E+04 + 3.87E+03 ± 2.40E+03 ≈ 2.24E+03 ± 2.44E+02 ≈
θ = 1, p = 0.5 3.71E+04 ± 2.41E+04 + 7.96E+03 ± 2.31E+03 + 2.12E+04 ± 3.50E+04 + 2.78E+03 ± 1.46E+03 ≈ 2.17E+03 ± 2.49E+02 -
θ = 1, p = 0.8 2.92E+04 ± 1.79E+04 + 8.59E+03 ± 2.75E+03 + 1.26E+04 ± 1.53E+04 + 3.22E+03 ± 1.64E+03 ≈ 2.21E+03 ± 2.18E+02 -
θ = 2, p = 0.2 2.74E+04 ± 1.22E+04 + 5.47E+03 ± 2.16E+03 + 2.71E+04 ± 3.72E+04 + 2.50E+03 ± 1.19E+03 ≈ 2.28E+03 ± 2.42E+02 ≈
θ = 2, p = 0.8 2.22E+04 ± 9.13E+03 + 5.49E+03 ± 1.85E+03 + 3.31E+04 ± 5.15E+04 + 2.64E+03 ± 1.45E+03 ≈ 2.32E+03 ± 1.82E+02 ≈
θ = 3, p = 0.2 2.60E+04 ± 1.21E+04 + 5.02E+03 ± 2.55E+03 + 1.10E+04 ± 1.26E+04 + 3.42E+03 ± 1.82E+03 ≈ 2.21E+03 ± 2.51E+02 -
θ = 3, p = 0.5 2.37E+04 ± 1.43E+04 + 5.40E+03 ± 2.67E+03 + 1.87E+04 ± 2.78E+04 + 2.79E+03 ± 1.45E+03 ≈ 2.29E+03 ± 2.08E+02 ≈
θ = 3, p = 0.8 2.90E+04 ± 1.21E+04 + 4.48E+03 ± 1.96E+03 + 3.10E+04 ± 4.41E+04 + 3.02E+03 ± 1.32E+03 ≈ 2.18E+03 ± 2.15E+02 -

F17 F18 F19 F20 F21

θ = 2, p = 0.5 1.99E+03 ± 1.71E+02 7.21E+04 ± 2.52E+04 8.26E+03 ± 5.42E+03 2.19E+03 ± 6.94E+01 2.32E+03 ± 6.99E+00
θ = 1, p = 0.2 2.00E+03 ± 1.27E+02 ≈ 7.71E+04 ± 3.14E+04 ≈ 7.35E+03 ± 4.08E+03 ≈ 2.20E+03 ± 7.19E+01 + 2.32E+03 ± 5.67E+00 ≈
θ = 1, p = 0.5 1.97E+03 ± 1.07E+02 ≈ 6.75E+04 ± 2.69E+04 ≈ 7.53E+03 ± 4.41E+03 ≈ 2.19E+03 ± 5.90E+01 + 2.32E+03 ± 6.63E+00 ≈
θ = 1, p = 0.8 1.97E+03 ± 1.32E+02 ≈ 8.66E+04 ± 5.26E+04 ≈ 9.02E+03 ± 5.16E+03 ≈ 2.20E+03 ± 7.51E+01 + 2.32E+03 ± 6.80E+00 ≈
θ = 2, p = 0.2 1.97E+03 ± 1.24E+02 ≈ 6.84E+04 ± 2.52E+04 ≈ 8.40E+03 ± 5.49E+03 ≈ 2.18E+03 ± 4.57E+01 ≈ 2.32E+03 ± 5.28E+00 ≈
θ = 2, p = 0.8 2.00E+03 ± 1.28E+02 ≈ 6.71E+04 ± 3.44E+04 ≈ 7.51E+03 ± 4.95E+03 ≈ 2.17E+03 ± 3.39E+01 − 2.32E+03 ± 5.80E+00 ≈
θ = 3, p = 0.2 1.96E+03 ± 1.44E+02 ≈ 7.99E+04 ± 5.06E+04 ≈ 7.34E+03 ± 5.36E+03 ≈ 2.21E+03 ± 1.19E+02 + 2.32E+03 ± 7.32E+00 ≈
θ = 3, p = 0.5 1.95E+03 ± 1.15E+02 ≈ 7.12E+04 ± 3.05E+04 ≈ 8.03E+03 ± 5.08E+03 ≈ 2.19E+03 ± 6.11E+01 ≈ 2.32E+03 ± 5.87E+00 ≈
θ = 3, p = 0.8 1.98E+03 ± 1.13E+02 ≈ 7.96E+04 ± 5.81E+04 ≈ 8.42E+03 ± 5.43E+03 ≈ 2.19E+03 ± 5.43E+01 + 2.32E+03 ± 7.61E+00 ≈

F22 F23 F24 F25 F26

θ = 2, p = 0.5 2.30E+03 ± 6.30E−06 2.65E+03 ± 1.70E+01 2.76E+03 ± 3.96E+01 2.89E+03 ± 6.19E−02 2.87E+03 ± 4.66E+01
θ = 1, p = 0.2 2.30E+03 ± 1.83E−09 − 2.66E+03 ± 1.99E+01 ≈ 2.77E+03 ± 3.70E+01 ≈ 2.89E+03 ± 5.44E−02 ≈ 2.85E+03 ± 5.07E+01 -
θ = 1, p = 0.5 2.30E+03 ± 2.60E−09 ≈ 2.66E+03 ± 1.96E+01 ≈ 2.78E+03 ± 3.84E+01 ≈ 2.89E+03 ± 5.28E−02 ≈ 2.87E+03 ± 4.50E+01 ≈
θ = 1, p = 0.8 2.30E+03 ± 2.50E−09 ≈ 2.66E+03 ± 1.87E+01 ≈ 2.77E+03 ± 4.22E+01 ≈ 2.89E+03 ± 6.49E−02 ≈ 2.86E+03 ± 5.04E+01 ≈
θ = 2, p = 0.2 2.30E+03 ± 1.69E−09 − 2.65E+03 ± 1.94E+01 ≈ 2.76E+03 ± 4.37E+01 ≈ 2.89E+03 ± 5.40E−02 ≈ 2.87E+03 ± 4.66E+01 -
θ = 2, p = 0.8 2.30E+03 ± 1.97E−09 ≈ 2.65E+03 ± 1.87E+01 ≈ 2.77E+03 ± 4.25E+01 ≈ 2.89E+03 ± 7.58E−02 ≈ 2.88E+03 ± 4.30E+01 ≈
θ = 3, p = 0.2 2.30E+03 ± 1.74E−09 − 2.65E+03 ± 1.92E+01 ≈ 2.75E+03 ± 4.45E+01 ≈ 2.89E+03 ± 6.67E−02 ≈ 2.87E+03 ± 4.50E+01 -
θ = 3, p = 0.5 2.30E+03 ± 2.35E−09 − 2.66E+03 ± 1.68E+01 ≈ 2.77E+03 ± 4.07E+01 ≈ 2.89E+03 ± 6.77E−02 ≈ 2.84E+03 ± 5.04E+01 -
θ = 3, p = 0.8 2.30E+03 ± 1.58E−09 − 2.66E+03 ± 1.65E+01 ≈ 2.77E+03 ± 4.08E+01 ≈ 2.89E+03 ± 6.06E−02 ≈ 2.86E+03 ± 5.04E+01 ≈

F27 F28 F29 F30 w/t/l

θ = 2, p = 0.5 3.23E+03 ± 1.49E+01 3.17E+03 ± 5.32E+01 3.40E+03 ± 6.98E+01 7.38E+03 ± 6.60E+02


θ = 1, p = 0.2 3.22E+03 ± 1.23E+01 ≈ 3.16E+03 ± 5.45E+01 ≈ 3.40E+03 ± 9.30E+01 ≈ 8.21E+03 ± 1.09E+03 + 11/ 14/ 4
θ = 1, p = 0.5 3.22E+03 ± 9.76E+00 ≈ 3.16E+03 ± 5.49E+01 ≈ 3.43E+03 ± 1.07E+02 ≈ 9.68E+03 ± 1.56E+03 + 12/ 14/ 3
θ = 1, p = 0.8 3.22E+03 ± 1.00E+01 ≈ 3.16E+03 ± 5.48E+01 ≈ 3.39E+03 ± 6.45E+01 ≈ 1.04E+04 ± 1.55E+03 + 11/ 15/ 3
θ = 2, p = 0.2 3.22E+03 ± 1.02E+01 ≈ 3.16E+03 ± 5.52E+01 ≈ 3.42E+03 ± 1.05E+02 ≈ 7.33E+03 ± 6.56E+02 ≈ 6/ 19/ 4
θ = 2, p = 0.8 3.23E+03 ± 1.16E+01 ≈ 3.15E+03 ± 5.44E+01 ≈ 3.41E+03 ± 8.60E+01 ≈ 7.27E+03 ± 6.99E+02 ≈ 7/ 19/ 3
θ = 3, p = 0.2 3.22E+03 ± 1.15E+01 ≈ 3.17E+03 ± 5.43E+01 ≈ 3.44E+03 ± 1.04E+02 ≈ 7.62E+03 ± 7.45E+02 ≈ 7/ 17/ 5
θ = 3, p = 0.5 3.22E+03 ± 1.29E+01 ≈ 3.17E+03 ± 5.34E+01 ≈ 3.38E+03 ± 4.73E+01 − 7.46E+03 ± 5.35E+02 ≈ 6/ 18/ 5
θ = 3, p = 0.8 3.22E+03 ± 1.25E+01 − 3.17E+03 ± 5.30E+01 ≈ 3.38E+03 ± 6.19E+01 − 7.55E+03 ± 9.33E+02 ≈ 7/ 16/ 5

Therefore, the total time complexity of the ALGSA is O(N³). In addition, the time complexities of the GSA, HGSA, MGSA, GGSA, CGSA, IGSA, PSOGSA, and DNLGSA are O(N²), O(N²), O(N²), O(N²), O(N²), O(N⁴), O(N²), and O(N²), respectively. This analysis shows the limitation of the ALGSA in terms of time complexity.

Table 13
Experimental results of the ratio adjustment and the random adjustment on the CEC2017 benchmark functions.

Function   Ratio adjustment (Mean ± Std)   Rand adjustment (Mean ± Std)
F1 1.59E+03 ± 1.76E+03 1.84E+03 ± 1.65E+03 +
F3 4.71E+04 ± 1.22E+04 5.71E+04 ± 1.09E+04 +
F4 5.13E+02 ± 1.30E+01 5.19E+02 ± 1.17E+01 +
F5 5.16E+02 ± 3.66E+00 5.19E+02 ± 4.21E+00 ≈
F6 6.00E+02 ± 1.16E−06 6.00E+02 ± 5.01E−02 +
F7 7.42E+02 ± 3.01E+00 7.45E+02 ± 4.05E+00 +
F8 8.15E+02 ± 3.54E+00 8.17E+02 ± 4.09E+00 ≈
F9 9.00E+02 ± 7.64E−12 9.00E+02 ± 1.22E−01 +
F10 2.40E+03 ± 3.82E+02 2.48E+03 ± 3.20E+02 +
F11 1.16E+03 ± 3.03E+01 1.16E+03 ± 3.37E+01 ≈
F12 2.53E+04 ± 1.05E+04 5.48E+04 ± 3.33E+04 +
F13 5.40E+03 ± 2.07E+03 5.63E+03 ± 1.72E+03 +
F14 2.01E+04 ± 1.77E+04 2.45E+04 ± 3.00E+04 +
F15 2.82E+03 ± 1.09E+03 2.39E+03 ± 1.08E+03 ≈
F16 2.35E+03 ± 1.82E+02 2.37E+03 ± 2.42E+02 ≈
F17 2.00E+03 ± 1.27E+02 2.01E+03 ± 1.37E+02 ≈
F18 7.27E+04 ± 2.95E+04 9.16E+04 ± 4.03E+04 +
F19 6.82E+03 ± 3.60E+03 7.95E+03 ± 4.37E+03 ≈
F20 2.18E+03 ± 5.22E+01 2.20E+03 ± 8.55E+01 ≈
F21 2.32E+03 ± 5.23E+00 2.32E+03 ± 5.71E+00 ≈
F22 2.30E+03 ± 7.94E−06 2.30E+03 ± 4.51E−01 +
F23 2.65E+03 ± 1.79E+01 2.65E+03 ± 1.50E+01 ≈
F24 2.76E+03 ± 3.88E+01 2.77E+03 ± 3.60E+01 ≈
F25 2.89E+03 ± 7.56E−02 2.89E+03 ± 1.98E−01 +
F26 2.87E+03 ± 4.79E+01 2.86E+03 ± 4.89E+01 ≈
F27 3.23E+03 ± 9.88E+00 3.22E+03 ± 1.33E+01 ≈
F28 3.15E+03 ± 5.51E+01 3.17E+03 ± 5.51E+01 ≈
F29 3.41E+03 ± 8.69E+01 3.43E+03 ± 8.67E+01 +
F30 7.13E+03 ± 5.48E+02 8.08E+03 ± 9.53E+02 +
w/t/l − 15/14/0

Table 14
Wilcoxon rank-sum results of the ALGSA versus the GSA-SAG, GSA-K, and ELGSA on the CEC2017 benchmark functions.

ALGSA vs   GSA-SAG   GSA-K    ELGSA
w/t/l      28/1/0    8/16/5   9/20/5

6. Conclusion

This paper proposes an aggregative learning gravitational search algorithm (ALGSA) with a self-adaptive gravitational constant. The self-adaptive gravitational constant establishes a situation in which each individual possesses its own gravitational constant, and each individual adjusts this constant according to its search condition to enhance its search performance. The self-adaptive strategy verifies that search information is useful for adjusting the parameters of algorithms. Aggregative learning uses the Kbest individuals to create several gravitational fields; these fields then attract each individual to further search the solution space. Aggregative learning improves the gravitational force of the individuals to avoid premature convergence, and this strategy implies that the interaction manner among individuals can influence the search performance of an algorithm and that aggregative learning is an effective interaction manner. The parameter analysis of the ALGSA is executed to determine reasonable parameter settings. The comparison between the ALGSA and several GSA variants verifies the effective search performance of the ALGSA on twenty-nine benchmark functions; the results indicate that the ALGSA is a superior algorithm among these variants of the GSA. The comparison between the ALGSA and other meta-heuristic algorithms verifies that the ALGSA is an effective meta-heuristic algorithm. On the real-world problems, the ALGSA performs better than the variants of the GSA, which means that the ALGSA can effectively deal with real-world problems. Moreover, the training of the ANN verifies that the ALGSA is significantly better than the back-propagation algorithm, which means that the ALGSA can assist neural networks in solving more complex real-world problems. Finally, the time complexity of the proposed method is analyzed at the end of the work.

This paper has verified that the ALGSA is an effective algorithm, but it still suffers from high time complexity due to the complexity of the aggregative learning strategy. In future work, simplifying aggregative learning is an important task. This paper also verifies that search information is very useful and provides one manner of utilizing it. How search information should be used, and where it should be used, are also the focus of future work.

Declaration of Competing Interest

The authors declare that they have no conflicts of interest.

CRediT authorship contribution statement

Zhenyu Lei: Methodology, Software, Writing - original draft. Shangce Gao: Conceptualization, Data curation, Validation, Supervision. Shubham Gupta: Writing - review & editing. Jiujun Cheng: Writing - review & editing. Gang Yang: Software, Validation.

Acknowledgments

This research was partially supported by the National Natural Science Foundation of China (Grant nos. 61872271, 11972115), the Beijing Natural Science Foundation (No. 4192029), and the Fundamental Research Funds for the Central Universities under grant no. 22120190208.

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.eswa.2020.113396.

References

Alba, E., & Dorronsoro, B. (2005). The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Transactions on Evolutionary Computation, 9(2), 126–142.
Antonio, L. M., & Coello, C. A. C. (2017). Coevolutionary multiobjective evolutionary algorithms: Survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 22(6), 851–865.
Askarzadeh, A. (2016). A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Computers & Structures, 169, 1–12.
Awad, N., Ali, M., Liang, J., Qu, B., & Suganthan, P. (2016). Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Tech. Rep.
Chang, W.-D. (2015). A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems. Applied Soft Computing, 33, 170–182.
Chen, W., Sun, J., Gao, S., Cheng, J.-J., Wang, J., & Todo, Y. (2017). Using a single dendritic neuron to forecast tourist arrivals to Japan. IEICE Transactions on Information and Systems, 100(1), 190–202.
Das, S., & Suganthan, P. N. (2010). Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems (pp. 341–359). Kolkata: Jadavpur University, Nanyang Technological University.
De Castro, L. N., & Timmis, J. (2002). An artificial immune network for multimodal function optimization. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), Vol. 1 (pp. 699–704). IEEE.
Dhumane, A. V., & Prasad, R. S. (2019). Multi-objective fractional gravitational search algorithm for energy efficient routing in IoT. Wireless Networks, 25(1), 399–413.
Doğan, B., & Ölmez, T. (2015). A new metaheuristic for numerical function optimization: Vortex search algorithm. Information Sciences, 293, 125–145.
Doraghinejad, M., & Nezamabadi-pour, H. (2014). Black hole: A new operator for gravitational search algorithm. International Journal of Computational Intelligence Systems, 7(5), 809–826.
Gao, K., Cao, Z., Zhang, L., Chen, Z., Han, Y., & Pan, Q. (2019). A review on swarm intelligence and evolutionary algorithms for solving flexible job shop scheduling problems. IEEE/CAA Journal of Automatica Sinica, 6(4), 904–916.
Gao, S., Vairappan, C., Wang, Y., Cao, Q., & Tang, Z. (2014). Gravitational search algorithm combined with chaos for unconstrained numerical optimization. Applied Mathematics and Computation, 231, 48–62.
Gao, S., Wang, Y., Cheng, J., Inazumi, Y., & Tang, Z. (2016). Ant colony optimization with clustering for solving the dynamic location routing problem. Applied Mathematics and Computation, 285, 149–173.
Gao, S., Zhou, M., Wang, Y., Cheng, J., Yachi, H., & Wang, J. (2018). Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction. IEEE Transactions on Neural Networks and Learning Systems, 30(2), 601–614.
Giacobini, M., Preuss, M., & Tomassini, M. (2006). Effects of scale-free and small-world topologies on binary coded self-adaptive CEA. In European Conference on Evolutionary Computation in Combinatorial Optimization (pp. 86–98). Springer.
Gong, Y.-J., Chen, W.-N., Zhan, Z.-H., Zhang, J., Li, Y., Zhang, Q., & Li, J.-J. (2015). Distributed evolutionary algorithms and their models: A survey of the state-of-the-art. Applied Soft Computing, 34, 286–300.
González, B., Valdez, F., Melin, P., & Prado-Arechiga, G. (2015). Fuzzy logic in the gravitational search algorithm enhanced using fuzzy logic with dynamic alpha parameter value adaptation for the optimization of modular neural networks in echocardiogram recognition. Applied Soft Computing, 37, 245–254.
Gu, B., & Pan, F. (2013). Modified gravitational search algorithm with particle memory ability and its application. International Journal of Innovative Computing, Information and Control, 9(11), 4531–4544.
Gu, W., Yu, Y., & Hu, W. (2017). Artificial bee colony algorithm based parameter estimation of fractional-order chaotic system with time delay. IEEE/CAA Journal of Automatica Sinica, 4(1), 107–113.
Güvenç, U., & Katırcıoğlu, F. (2017). Escape velocity: A new operator for gravitational search algorithm. Neural Computing and Applications (pp. 1–16).
Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66–73.
Ji, J., Gao, S., Wang, S., Tang, Y., Yu, H., & Todo, Y. (2017). Self-adaptive gravitational search algorithm with a modified chaotic local search. IEEE Access, 5, 17881–17895.
Ji, J., Song, S., Tang, C., Gao, S., Tang, Z., & Todo, Y. (2019). An artificial bee colony algorithm search guided by scale-free networks. Information Sciences, 473, 142–165.
Jiang, T., Gao, S., Wang, D., Ji, J., Todo, Y., & Tang, Z. (2017). A neuron model with synaptic nonlinearities in a dendritic tree for liver disorders. IEEJ Transactions on Electrical and Electronic Engineering, 12(1), 105–115.
Khatibinia, M., & Khosravi, S. (2014). A hybrid approach based on an improved gravitational search algorithm and orthogonal crossover for optimal shape design of concrete gravity dams. Applied Soft Computing, 16, 223–233.
Lee, B., Yeon, P., & Ghovanloo, M. (2016). A multicycle q-modulation for dynamic optimization of inductive links. IEEE Transactions on Industrial Electronics, 63(8), 5091–5100.
Li, B., Li, J., Tang, K., & Yao, X. (2015). Many-objective evolutionary algorithms: A survey. ACM Computing Surveys (CSUR), 48(1), 13.
Li, C., Zhang, N., Lai, X., Zhou, J., & Xu, Y. (2017). Design of a fractional-order PID controller for a pumped storage unit using a gravitational search algorithm based on the Cauchy and Gaussian mutation. Information Sciences, 396, 162–181.
Li, L.-L., Lin, G.-Q., Tseng, M.-L., Tan, K., & Lim, M. K. (2018). A maximum power point tracking method for PV system with improved gravitational search algorithm. Applied Soft Computing, 65, 333–348.
Mavrovouniotis, M., Li, C., & Yang, S. (2017). A survey of swarm intelligence for dynamic optimization: Algorithms and applications. Swarm and Evolutionary Computation, 33, 1–17.
Mirjalili, S. (2016). SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Systems, 96, 120–133.
Mirjalili, S., & Hashim, S. Z. M. (2010). A new hybrid PSOGSA algorithm for function optimization. In 2010 International Conference on Computer and Information Application (pp. 374–377). IEEE.
Mirjalili, S., & Lewis, A. (2014). Adaptive gbest-guided gravitational search algorithm. Neural Computing and Applications, 25(7-8), 1569–1584.
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.
Mittal, N., Singh, U., & Sohi, B. S. (2016). Modified grey wolf optimizer for global engineering optimization. Applied Computational Intelligence and Soft Computing, 2016, 8.
Nobahar, H., Nikusokhan, M., & Siarry, P. (2012). A multi-objective gravitational search algorithm based on non-dominated sorting. International Journal of Swarm Intelligence Research, 3(3), 32–49.
Olivas, F., Valdez, F., Melin, P., Sombra, A., & Castillo, O. (2019). Interval type-2 fuzzy logic for dynamic parameter adaptation in a modified gravitational search algorithm. Information Sciences, 476, 159–175.
Rashedi, E., Nezamabadi-Pour, H., & Saryazdi, S. (2009). GSA: A gravitational search algorithm. Information Sciences, 179(13), 2232–2248.
Rashedi, E., Nezamabadi-Pour, H., & Saryazdi, S. (2010). BGSA: Binary gravitational search algorithm. Natural Computing, 9(3), 727–745.
Roy, P., Mahapatra, G. S., & Dey, K. N. (2019). Forecasting of software reliability using neighborhood fuzzy particle swarm optimization based novel neural network. IEEE/CAA Journal of Automatica Sinica, 6(6), 1365–1383.
Sarafrazi, S., Nezamabadi-Pour, H., & Saryazdi, S. (2011). Disruption: A new operator in gravitational search algorithm. Scientia Iranica, 18(3), 539–548.
Sarafrazi, S., Nezamabadi-pour, H., & Seydnejad, S. R. (2015). A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization. Journal of King Saud University - Computer and Information Sciences, 27(3), 288–296.
Shamsudin, H. C., Irawan, A., Ibrahim, Z., Abidin, A. F. Z., Wahyudi, S., Rahim, M. A. A., & Khalil, K. (2012). A fast discrete gravitational search algorithm. In 2012 Fourth International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM) (pp. 24–28). IEEE.
Sombra, A., Valdez, F., Melin, P., & Castillo, O. (2013). A new gravitational search algorithm using fuzzy logic to parameter adaptation. In 2013 IEEE Congress on Evolutionary Computation (pp. 1068–1074). IEEE.
Song, X., Huang, Y., Yan, H., Xiong, Y., & Min, S. (2016). A novel algorithm for spectral interval combination optimization. Analytica Chimica Acta, 948, 19–29.
Song, Z., Gao, S., Yu, Y., Sun, J., & Todo, Y. (2017). Multiple chaos embedded gravitational search algorithm. IEICE Transactions on Information and Systems, 100(4), 888–900.
Storn, R. (1996). On the usage of differential evolution for function optimization. In Proceedings of North American Fuzzy Information Processing (pp. 519–523). IEEE.
Tang, K., Yang, P., & Yao, X. (2016). Negatively correlated search. IEEE Journal on Selected Areas in Communications, 34(3), 542–550.
Tang, Y., Ji, J., Gao, S., Dai, H., Yu, Y., & Todo, Y. (2018). A pruning neural network model in credit classification analysis. Computational Intelligence and Neuroscience, 2018.
Valdez, F., Melin, P., & Castillo, O. (2014). A survey on nature-inspired optimization algorithms with fuzzy logic for dynamic parameter adaptation. Expert Systems with Applications, 41(14), 6459–6466.
Wang, J., & Kumbasar, T. (2019). Parameter optimization of interval type-2 fuzzy neural networks based on PSO and BBBC methods. IEEE/CAA Journal of Automatica Sinica, 6(1), 247–257.
Wang, Y., Gao, S., Yu, Y., Wang, Z., Cheng, J., & Yuki, T. (2020). A gravitational search algorithm with chaotic neural oscillators. IEEE Access, 8, 25938–25948.
Wang, Y., Yu, Y., Gao, S., Pan, H., & Yang, G. (2019). A hierarchical gravitational search algorithm with an effective gravitational constant. Swarm and Evolutionary Computation, 46, 118–139.
Yu, Y., Gao, S., Wang, Y., & Todo, Y. (2019). Global optimum-based search differential evolution. IEEE/CAA Journal of Automatica Sinica, 6(2), 379–394.
Zhang, A., Sun, G., Ren, J., Li, X., Wang, Z., & Jia, X. (2016). A dynamic neighborhood learning-based gravitational search algorithm. IEEE Transactions on Cybernetics, 48(1), 436–447.
Zhou, T., Gao, S., Wang, J., Chu, C., Todo, Y., & Tang, Z. (2016). Financial time series prediction using a dendritic neuron model. Knowledge-Based Systems, 105, 214–224.
