
Applied Mathematics and Computation 271 (2015) 269–287


Artificial bee colony algorithm with multiple search strategies


Wei-feng Gao a,∗, Ling-ling Huang a, San-yang Liu b, Felix T.S. Chan c, Cai Dai d, Xian Shan a

a School of Science, China University of Petroleum, Qingdao 266580, China
b School of Mathematics and Statistics, Xidian University, Xi'an, Shaanxi 710071, China
c Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
d College of Computer Science, Shaanxi Normal University, Xi'an 710062, China

Keywords: Evolutionary algorithms; Artificial bee colony algorithm; Strategy candidate pool; Gaussian distribution; Search equation

Abstract

Considering that the solution search equation of the artificial bee colony (ABC) algorithm does well in exploration but badly in exploitation, which results in slow convergence, this paper studies whether the performance of ABC can be improved by combining different search strategies that have distinct advantages. Based on this consideration, we develop a novel ABC with multiple search strategies, named MuABC. MuABC uses three search strategies to constitute a strategy candidate pool. To further improve the performance of the algorithm, an adaptive selection mechanism is used to choose suitable search strategies for generating candidate solutions based on the previous search experience. In addition, candidate solutions are generated based on a Gaussian distribution to improve the search ability. MuABC is tested on a set of 22 benchmark functions and is compared with several other ABCs and state-of-the-art algorithms. The comparison results show that the proposed algorithm offers the highest solution quality, the fastest global convergence, and the strongest robustness among all the contenders on almost all the cases.
© 2015 Elsevier Inc. All rights reserved.

1. Introduction

Since various science and engineering fields involve complex optimization problems, optimization techniques are very important for researchers and have always been a research hot spot. Generally speaking, a minimization problem can be expressed in the
following form

Minimize f(X), subject to X ∈ Ω, (1.1)

where X = [x1, x2, . . . , xD] is a D-dimensional vector of decision variables in the feasible region Ω. Traditional optimization algorithms such as steepest descent, the conjugate gradient method, Newton's method, etc., generally fail to handle multimodal,
non-convex, non-differentiable or discontinuous optimization problems. For instance, most traditional methods require gradient information and hence cannot deal with non-differentiable functions. Furthermore, they often
fall into local optima when dealing with complex multimodal functions. Therefore, it is essential to develop more practical
optimization techniques.


Corresponding author. Tel.: +86053286983537.
E-mail address: gaoweifeng2004@126.com, upcgwf@sina.cn (W.-f. Gao).

http://dx.doi.org/10.1016/j.amc.2015.09.019
0096-3003/© 2015 Elsevier Inc. All rights reserved.

In the past few decades, evolutionary algorithms (EAs) have achieved considerable success in handling such complex problems, as they do not depend on the differentiability, continuity, or convexity of the objective function, and they have attracted
more and more attention. In the family of EAs, the most popular methods are genetic algorithms (GA) [1], differential evolution
(DE) [2], particle swarm optimization (PSO) [3], biogeography-based optimization (BBO) [4], ant colony optimization (ACO) [5],
and the artificial bee colony (ABC) algorithm [6].
Concretely speaking, this paper focuses on ABC, developed by Karaboga [6] based on simulating the intelligent foraging be-
havior of honey bees. The simulation results indicate that ABC is superior to or at least comparable to GA, DE, and PSO [7–10].
Due to its simple structure, easy implementation and outstanding performance, ABC has received growing interest and has been
successfully applied to solve many real-world optimization problems [11–13] since its invention.
However, similar to other EAs, ABC also suffers from slow convergence, because its search equation does well in
exploration but badly in exploitation [25]. To strike a better balance between exploration and exploitation,
many ABC variants have been proposed that hybridize ABC with other operations [14–24]. For
example, Karaboga and Basturk [14] developed a modified version of ABC which employs the frequency of perturbation and the
ratio of the variance operation. Kang et al. [15] integrated Nelder–Mead simplex method into ABC and proposed a hybrid ABC.
Alatas [17] proposed a chaotic ABC by introducing the chaotic map into the initialization and the scouts phase. Xiang and An [22]
reported an improved ABC by employing a chaotic search technique, a reverse selection and a combinatorial solution search. Gao
et al. [24] suggested a general framework to improve the search ability of ABC by using the orthogonal learning strategy.
Many attempts have also been developed to improve the search ability of ABC by the modified search equations [24–30].
For example, motivated by PSO, Zhu and Kwong [25] proposed a gbest-guided ABC (GABC) which makes use of the information
of global best solution to improve the exploitation. Li et al. [27] introduced an inertia weight and two acceleration coefficients,
and developed a modified ABC. Drawing inspiration from DE, Gao et al. [30] designed two modified solution search equations,
named ABCbest. Gao et al. [24] proposed a novel solution search equation like the crossover operation of GA, named CABC. The
experimental results show the modified search equation performs effectively. The study is not limited to the above two aspects
and more work can be seen in [10].
It is clear that some search strategies suit global exploration while others can speed up convergence. Without question, these experiences are very useful for improving the performance of ABC. However, it has been observed
that these experiences have not been systematically utilized to design a new ABC variant. This motivates us to research whether
the performance of ABC can be improved by combining several different search strategies, which have different advantages iden-
tified by other researchers’ works. Our work along this line develops a novel ABC with multiple search strategies, named MuABC.
This presented approach combines three search strategies by an adaptive selection mechanism to produce candidate solutions. In
addition, a Gaussian distribution is introduced to the three search strategies to improve the search ability. MuABC also preserves
the good characteristics of the original ABC, such as simple structure, easy implementation, and so on. The comparison results,
on a set of benchmark functions, indicate that MuABC performs competitively and effectively when compared to the selected
state-of-the-art algorithms.
The rest of this paper is organized as follows. Section 2 reviews ABC. The presented approach is introduced in Section 3. The
comparison results are presented and discussed in Section 4. Finally, the conclusion is drawn in Section 5.

2. The original ABC

ABC, proposed by Karaboga [6], is an optimization algorithm that simulates the intelligent foraging behavior of honey bee swarms. In ABC, a colony involves three different classes of bees: employed bees, onlookers, and scouts. The
framework of ABC is shown in Fig. 1.
The population of ABC consists of SN D-dimensional vectors of decision variables

Xi = {xi,1 , xi,2 , . . . , xi,D }, i = 1, 2, . . . , SN. (2.1)

In the beginning, each Xi, which is defined by the lower and upper bounds Xmin and Xmax, is generated by Eq. (2.2).

xi, j = xmin, j + rand(0, 1)(xmax, j − xmin, j ), (2.2)

where i = 1, 2, . . . , SN, j = 1, 2, . . . , D.
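As an illustration only, a minimal NumPy sketch of this initialization step (the function and variable names are ours, not from the paper; x_min and x_max are length-D bound vectors):

```python
import numpy as np

def init_population(SN, D, x_min, x_max, rng=np.random.default_rng()):
    # Eq. (2.2): x_{i,j} = x_{min,j} + rand(0,1) * (x_{max,j} - x_{min,j})
    return x_min + rng.random((SN, D)) * (x_max - x_min)
```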
In the employed bee phase, with respect to each individual Xi, a candidate Vi is produced by adding the scaled difference of
two population members to the base individual, i.e.

vi, j = xi, j + φi, j (xi, j − xk, j ), (2.3)

where k ∈ {1, 2, . . . , SN} and j ∈ {1, 2, . . . , D} are randomly chosen indexes; k has to be different from i, and φi,j is a random
number in the range [−1, 1]. The selection operation is performed to select the better one from the old individual Xi and the
candidate Vi .
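A small sketch of this candidate generation and greedy selection, where, as stated above, only the randomly chosen dimension j is perturbed (names are illustrative; f is the objective to be minimized):

```python
import numpy as np

def employed_bee_update(X, f, i, rng=np.random.default_rng()):
    SN, D = X.shape
    k = rng.choice([s for s in range(SN) if s != i])   # partner index, k != i
    j = rng.integers(D)                                # dimension to perturb
    phi = rng.uniform(-1.0, 1.0)
    V = X[i].copy()
    V[j] = X[i, j] + phi * (X[i, j] - X[k, j])         # Eq. (2.3)
    return V if f(V) < f(X[i]) else X[i]               # greedy selection
```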
After all employed bees finish their search, they will share the information with onlookers. Each onlooker chooses a solution
depending on the probability value pi of the corresponding solution as follows


pi = fiti / ∑_{j=1}^{SN} fitj, (2.4)

Fig. 1. The framework of ABC.

where fiti is the fitness value of the ith solution. Then, each onlooker generates a new solution by Eq. (2.3) and the greedy
selection is applied again as in the case of the employed bee.
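A small sketch of this selection probability and the roulette-wheel draw an onlooker performs (assuming the fitness values are nonnegative, as in the usual ABC fitness mapping; names are ours):

```python
import numpy as np

def onlooker_probabilities(fit):
    fit = np.asarray(fit, dtype=float)
    return fit / fit.sum()                      # Eq. (2.4)

def choose_food_source(fit, rng=np.random.default_rng()):
    # roulette-wheel choice of the solution an onlooker will exploit
    return rng.choice(len(fit), p=onlooker_probabilities(fit))
```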
If the quality of an individual cannot be improved within a predetermined number of cycles, then the individual is aban-
doned. The value of the predetermined number of cycles is an important control parameter for ABC, named limit. If Xi is an
abandoned individual, then the scout generates a new solution by Eq. (2.2) to replace Xi .
It should be noted that the elements of the candidate Vi may violate the predefined boundary constraints. Possible solutions
to tackle this problem include resetting schemes, penalty schemes, etc. Constrained problems, however, are not the main focus
of this paper. Thus, we follow a simple method [40] which sets each violating element to the midpoint between the violated bound and the corresponding element of the old solution Xi, i.e.,

vi,j = (xmin,j + xi,j)/2, if vi,j < xmin,j;   vi,j = (xmax,j + xi,j)/2, if vi,j > xmax,j, (2.5)

where j = 1, . . . , D.
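A sketch of this repair rule, written element-wise with NumPy boolean masks (x_old is the old solution Xi; all four arguments are length-D arrays; names are ours):

```python
import numpy as np

def repair_bounds(v, x_old, x_min, x_max):
    # Eq. (2.5): a violating element is reset to the midpoint between the
    # violated bound and the corresponding element of the old solution
    v = v.copy()
    low, high = v < x_min, v > x_max
    v[low] = 0.5 * (x_min[low] + x_old[low])
    v[high] = 0.5 * (x_max[high] + x_old[high])
    return v
```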

3. MuABC

The characteristics of the search strategies of ABCs have been comprehensively studied in [24–30], and some prior knowledge
has been gained in the past years. For example, ABC adopting different search strategies usually performs differently when
solving the optimization problems. Such prior knowledge could be used for designing more effective and robust ABC variants.
It has been observed that in most ABC variants, only one search strategy is adopted at each generation. As a consequence, the
search ability of these algorithms may be limited.
Based on the above considerations, this paper develops a novel approach, named MuABC, the major idea of which is to com-
bine several search strategies by the adaptive selection mechanism at each generation to produce new candidate solutions.
Generally speaking, we expect that the selected search strategies have distinct advantages and, hence, they can be effectively
combined to deal with different kinds of optimization problems. In this paper, we choose three search strategies to constitute
a strategy candidate pool. The three search strategies are introduced in the next subsection.

Fig. 2. The framework of MuABC.

At each generation, the three search strategies are selected by the adaptive selection mechanism from the strategy candidate
pool and are used to generate candidate solutions. Thus, three candidate solutions are produced for each individual. Then,
the best candidate solution enters the next generation if it is superior to the old solution. The pseudocode of MuABC is presented
in Fig. 2.
Next, we discuss the properties of the strategy candidate pool and the adaptive selection mechanism.

3.1. Strategy candidate pool

The search strategy (see Eq. (2.3)) plays an important role in determining the performance of the original ABC. Generally,
different kinds of optimization problems require different search strategies depending on their properties. To
solve a specific problem, different search strategies may be better during different stages of the evolution than a single
search strategy as in the original ABC. The multiple-search-strategies technique is considered to be a very efficient and effective way
to make progress. However, the main challenge is how to assign search strategies to the strategy candidate pool. In general,
an ideal candidate pool should be restrictive so that the negative influences of the less effective strategies can be suppressed.
In other words, the search strategies in the candidate pool should have diverse characteristics, so that they can exhibit distinct
performance characteristics during the different stages of the evolution.
Based on the above analysis, the three search strategies used in the paper are shown as follows:
vi_1,j = N((xi,j + xr1,j)/2, |xi,j − xr1,j|), (3.1)

vi_2,j = N((xr1,j + xr2,j)/2, |xr1,j − xr2,j|), (3.2)

vi_3,j = N((xbest,j + xmean,j)/2, |xbest,j − xmean,j|), (3.3)
where N denotes a Gaussian distribution; j ∈ {1, 2, . . . , D} is a randomly chosen index; xbest, j is the jth element of the best indi-
vidual in the current population; r1 and r2 are distinct integers randomly selected from the range [1, SN] and are also different
from i; xmean, j is the jth element of the arithmetic average of all the individuals in the current population.
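A sketch of the strategy candidate pool, under the reading that N(μ, σ) denotes a normal distribution with mean μ and standard deviation σ and that only the randomly chosen dimension j is resampled (the function and variable names are ours):

```python
import numpy as np

def candidate_from_pool(X, best, i, strategy, rng=np.random.default_rng()):
    """Generate one candidate for individual i with strategy 1, 2 or 3 (Eqs. (3.1)-(3.3))."""
    SN, D = X.shape
    j = rng.integers(D)
    r1, r2 = rng.choice([s for s in range(SN) if s != i], size=2, replace=False)
    x_mean = X.mean(axis=0)
    if strategy == 1:      # Eq. (3.1): centered between x_i and a random member
        mu, sigma = 0.5 * (X[i, j] + X[r1, j]), abs(X[i, j] - X[r1, j])
    elif strategy == 2:    # Eq. (3.2): centered between two random members
        mu, sigma = 0.5 * (X[r1, j] + X[r2, j]), abs(X[r1, j] - X[r2, j])
    else:                  # Eq. (3.3): centered between the best and the population mean
        mu, sigma = 0.5 * (best[j] + x_mean[j]), abs(best[j] - x_mean[j])
    V = X[i].copy()
    V[j] = rng.normal(mu, sigma)
    return V
```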
Eqs. (3.1) and (3.2) are the modified versions of the search equations of the original ABC and CABC [24], respectively. Espe-
cially, in Eq. (3.1), the individual Xr1 is selected from the population randomly and, consequently, it has no bias to any special
search direction and chooses a new search direction in a random manner. It usually has stronger exploration. Therefore, it is
usually more suitable for solving the multimodal problems. In Eq. (3.2), the two different individuals used to generate a candidate
solution are both selected from the population randomly, which brings more information into the search equation and tends to produce
a more promising candidate solution than Eq. (3.1). What is more, the Gaussian distribution embedded in Eqs. (3.1) and (3.2) can enlarge the search range and improve the search
ability.
For Eq. (3.3), on the one hand, it can utilize the information of the best individual found so far to enhance the exploitation ability. Thus, it usually has a fast convergence speed and performs well when solving the unimodal problems. On
the other hand, since the Gaussian distribution is employed and xmean,j is exploited, it can maintain the population diversity. In other words, it can improve the ability to exploit the search region without losing the population diversity. For this
reason, we develop this strategy in the paper. In summary, the characteristics of the three search strategies are as
follows.
The characteristics of the three search strategies:
Eq. (3.1): suitable for global search.
Eq. (3.2): balances global and local search.
Eq. (3.3): suitable for local search without losing the population diversity.

From this summary, it can be seen that the three search strategies have different features; that is to say, they can show distinct abilities when handling a specific optimization problem at different
stages of the search process. In this way, the design caters to the requirement that a fine candidate pool should contain strategies with various properties.

3.2. Adaptive selection mechanism

In order to further learn good search information from the evolution process, MuABC adopts the adaptive selection mecha-
nism in [31] to select the search strategies for each individual based on the previous search experience. In other words, if one
search strategy performs better in the previous search, it will have more chance to be used in the subsequent search. Here, nsk, G
denotes the number of candidate solutions produced by the kth (k = 1, 2, and 3) search strategy which can successfully enter
the next generation at the generation G. nfk, G represents the number of candidate solutions produced by the kth (k = 1, 2, and 3)
search strategy which are discarded in the next generation at the generation G.
This mechanism needs another parameter, i.e., learning period (LP). In the first LP generations, each individual selects the
three search strategies with the same probability (i.e., 1/3). After the first LP generations, the probability Pk, G of employing the
kth search strategy is updated as follows:
Pk,G = Sk,G / ∑_{k=1}^{3} Sk,G, (3.4)

where

Sk,G = (∑_{g=G−LP}^{G−1} nsk,g) / (∑_{g=G−LP}^{G−1} nsk,g + ∑_{g=G−LP}^{G−1} nfk,g) + ε, (3.5)
where G > LP and k = 1, 2, 3. Sk,G refers to the success rate of the candidate solutions produced by the kth search strategy in entering the next generation within the previous LP generations with respect to generation G, and ε is a small tolerance value. A large
value of ε may make Sk,G far too large, which should be avoided. In contrast, a very small value of ε may leave Sk,G very small in some cases, which may prevent the corresponding search strategy from being selected for a long period. According to the analysis above, ε is set to 0.01 in our experiments, the same as in [31]. Roulette
wheel selection is used to select the three search strategies for each individual based on Eq. (3.4). Clearly, the larger Sk,G within
the previous LP generations, the larger the probability Pk,G of applying the corresponding search strategy to generate candidate
solutions at the current generation. Following the same setting as in [31], LP is set to 50 in this paper.
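The following sketch puts Eqs. (3.4) and (3.5) together with the roulette-wheel draw. It follows the SaDE-style bookkeeping described above; the small guard for strategies that were never tried in the last LP generations is an implementation detail of ours, not specified in the paper:

```python
import numpy as np

class StrategySelector:
    """Adaptive strategy selection following Eqs. (3.4)-(3.5)."""
    def __init__(self, n_strategies=3, LP=50, eps=0.01, rng=None):
        self.K, self.LP, self.eps = n_strategies, LP, eps
        self.rng = rng or np.random.default_rng()
        self.ns, self.nf = [], []        # per-generation success / failure counts

    def probabilities(self):
        if len(self.ns) < self.LP:                       # first LP generations: uniform
            return np.full(self.K, 1.0 / self.K)
        ns = np.sum(self.ns[-self.LP:], axis=0)
        nf = np.sum(self.nf[-self.LP:], axis=0)
        S = ns / np.maximum(ns + nf, 1) + self.eps       # Eq. (3.5), with a zero-count guard
        return S / S.sum()                               # Eq. (3.4)

    def select(self):
        # roulette-wheel choice of one strategy index (0 .. K-1)
        return self.rng.choice(self.K, p=self.probabilities())

    def record_generation(self, ns_G, nf_G):
        # ns_G[k], nf_G[k]: counts ns_{k,G}, nf_{k,G} for the kth strategy at generation G
        self.ns.append(np.asarray(ns_G, dtype=float))
        self.nf.append(np.asarray(nf_G, dtype=float))
```

In MuABC (Fig. 2), three such roulette-wheel draws per individual would yield the three candidate solutions; the best of them then competes with the old solution, and the success and failure counters are updated accordingly.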

4. Experimental studies on function optimization problems

4.1. Experimental settings

In order to test the performance of MuABC, a set of 20 scalable benchmark functions with D = 15, 30 or 60 [9,25,32,34] and
a set of 2 functions with D = 50, 100 or 200 [32,34] are used, as shown in Table 1. For convenience, according to the difference

Table 1
Benchmark functions used in experiments.

Name | Function | Search range | Accept
Sphere | f1(X) = ∑_{i=1}^{n} xi^2 | [−100, 100]^n | 1 × 10−8
Elliptic | f2(X) = ∑_{i=1}^{n} (10^6)^{(i−1)/(n−1)} xi^2 | [−100, 100]^n | 1 × 10−8
SumSquare | f3(X) = ∑_{i=1}^{n} i·xi^2 | [−10, 10]^n | 1 × 10−8
SumPower | f4(X) = ∑_{i=1}^{n} |xi|^{i+1} | [−1, 1]^n | 1 × 10−8
Schwefel 2.22 | f5(X) = ∑_{i=1}^{n} |xi| + ∏_{i=1}^{n} |xi| | [−10, 10]^n | 1 × 10−8
Schwefel 2.21 | f6(X) = max{|xi|, 1 ≤ i ≤ n} | [−100, 100]^n | 4 × 10^1
Step | f7(X) = ∑_{i=1}^{n} (⌊xi + 0.5⌋)^2 | [−100, 100]^n | 1 × 10−8
Exponential | f8(X) = exp(0.5·∑_{i=1}^{n} xi) − 1 | [−1.28, 1.28]^n | 1 × 10−8
Quartic | f9(X) = ∑_{i=1}^{n} i·xi^4 + random[0, 1) | [−1.28, 1.28]^n | 1 × 10−1
Rosenbrock | f10(X) = ∑_{i=1}^{n−1} [100(xi+1 − xi^2)^2 + (xi − 1)^2] | [−5, 10]^n | 5 × 10^0
Rastrigin | f11(X) = ∑_{i=1}^{n} [xi^2 − 10 cos(2π xi) + 10] | [−5.12, 5.12]^n | 1 × 10−8
NCRastrigin | f12(X) = ∑_{i=1}^{n} [yi^2 − 10 cos(2π yi) + 10], yi = xi if |xi| < 1/2, yi = round(2xi)/2 if |xi| ≥ 1/2 | [−5.12, 5.12]^n | 1 × 10−8
Griewank | f13(X) = (1/4000) ∑_{i=1}^{n} xi^2 − ∏_{i=1}^{n} cos(xi/√i) + 1 | [−600, 600]^n | 1 × 10−8
Schwefel 2.26 | f14(X) = 418.98288727243369·n − ∑_{i=1}^{n} xi sin(√|xi|) | [−500, 500]^n | 1 × 10−8
Ackley | f15(X) = −20 exp(−0.2 √((1/n)∑_{i=1}^{n} xi^2)) − exp((1/n)∑_{i=1}^{n} cos(2π xi)) + 20 + e | [−32, 32]^n | 1 × 10−8
Penalized 1 | f16(X) = (π/n){10 sin^2(π y1) + ∑_{i=1}^{n−1} (yi − 1)^2 [1 + 10 sin^2(π yi+1)] + (yn − 1)^2} + ∑_{i=1}^{n} u(xi, 10, 100, 4), where yi = 1 + (xi + 1)/4 and u(xi, a, k, m) = k(xi − a)^m if xi > a; 0 if −a ≤ xi ≤ a; k(−xi − a)^m if xi < −a | [−50, 50]^n | 1 × 10−8
Penalized 2 | f17(X) = 0.1{sin^2(π x1) + ∑_{i=1}^{n−1} (xi − 1)^2 [1 + sin^2(3π xi+1)] + (xn − 1)^2 [1 + sin^2(2π xn)]} + ∑_{i=1}^{n} u(xi, 5, 100, 4) | [−50, 50]^n | 1 × 10−8
Alpine | f18(X) = ∑_{i=1}^{n} |xi·sin(xi) + 0.1·xi| | [−10, 10]^n | 1 × 10−8
Levy | f19(X) = ∑_{i=1}^{n−1} (xi − 1)^2 [1 + sin^2(3π xi+1)] + sin^2(3π x1) + |xn − 1|[1 + sin^2(3π xn)] | [−10, 10]^n | 1 × 10−8
Weierstrass | f20(X) = ∑_{i=1}^{D} (∑_{k=0}^{kmax} [a^k cos(2π b^k (xi + 0.5))]) − D ∑_{k=0}^{kmax} [a^k cos(2π b^k · 0.5)], a = 0.5, b = 3, kmax = 20 | [−0.5, 0.5]^n | 1 × 10−8
Himmelblau | f21(X) = (1/n) ∑_{i=1}^{n} (xi^4 − 16 xi^2 + 5 xi) | [−5, 5]^n | −78
Michalewicz | f22(X) = −∑_{i=1}^{n} sin(xi) sin^20(i·xi^2/π) | [0, π]^n | −49, −95, −190
of dimension, these test functions are classified into the low-, middle- and high-dimensional functions. For instance, the high-dimensional functions involve the functions f1 − f20 with D = 60 and the functions f21 and f22 with D = 200, etc.
Outlined in Table 1 are the 22 scalable benchmark functions. These functions involve different types of problems such as the
unimodal functions f1 − f6 and f8 , the discontinuous step function f7 , the noisy quartic function f9 , and the multimodal functions
f11 − f22 whose local optima increase exponentially with the dimension of the problem. Particularly, the Rosenbrock function
f10 is unimodal for D = 2 and 3 but may have multiple optima in high dimensional cases [33]. Table 1 also presents the search
range (column 3). Moreover, “Accept” (column 4) is defined for each function. An experiment is considered successful if the best
solution is found with sufficient accuracy - “Accept”.
To make a fair comparison, all the algorithms run 50 times independently and are stopped when the maximum number of
50,000, 100,000 and 200,000 function evaluations (FES) in the cases of the low-, middle- and high-dimensional functions is
reached in each run. The population size and limit in MuABC are set to 40 [27] (i.e., there are 20 employed bees and 20 onlooker
bees) and 200 [14,27], respectively.
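For reference, these settings can be collected in a small configuration block; the structure and names below are ours, not the authors':

```python
# Experimental settings from Section 4.1
SETTINGS = {
    "independent_runs": 50,
    "SN": 40,                 # colony size: 20 employed bees + 20 onlooker bees
    "limit": 200,             # abandonment threshold
    "max_FES": {              # budget per run, by problem dimension group
        "low": 50_000, "middle": 100_000, "high": 200_000,
    },
}
```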

4.2. Comparisons among ABCs

MuABC is compared to the four selected state-of-the-art ABC variants, i.e., ABC [8], GABC [25], ABCbest [30], and CABC [24].
In our experiments, these four contenders follow the parameter settings in their original papers. The mean and the standard
deviation of the results obtained by each algorithm for f1 − f22 are summarized in Tables 2–4. In addition, in order to show
the significant differences between two algorithms, the Mann–Whitney–Wilcoxon rank sum test at 5% significance level is also
conducted. The result of the test is represented as “+/=/−”, which means that MuABC is significantly better than, equal to, and
worse than the compared algorithm, respectively.
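As a pointer only, such a test can be run with SciPy's Mann–Whitney implementation (an illustration, not the authors' script):

```python
from scipy.stats import mannwhitneyu

def significantly_different(results_a, results_b, alpha=0.05):
    # two-sided Mann-Whitney U (rank sum) test on the final errors of two algorithms
    _, p = mannwhitneyu(results_a, results_b, alternative="two-sided")
    return p < alpha   # True if the difference is significant at the 5% level
```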
An interesting result we notice is that all five ABC variants reliably find the minimum of f7. The optimal solutions
of f7 form a region rather than a single point; hence, this problem is relatively easy to solve with a 100% success rate. Some
significant conclusions on the quality of the solution found by each algorithm can be drawn from the results in Tables 2–4 on
Table 2
Result comparisons of ABCs on 15-dimensional functions f1 − f20 , and 50-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC Mean 6.80e-14+ 8.97e-10+ 3.31e-15+ 7.69e-22+ 4.58e-08+ 2.74e-00+ 0= 2.84e-16+ 4.92e-02+ 1.68e-01+ 2.90e-07+
SD 8.74e-14 1.23e-09 2.42e-15 2.18e-21 1.98e-08 7.38e-01 0 1.01e-16 1.71e-02 2.04e-01 8.64e-07
GABC Mean 5.50e-24+ 2.75e-20+ 4.21e-25+ 6.78e-34+ 4.73e-13+ 6.38e-01+ 0= 2.30e-16+ 2.26e-02+ 9.35e-01+ 3.06e-17+
SD 4.06e-24 3.35e-20 3.53e-25 1.46e-33 1.56e-13 1.75e-01 0 4.44e-17 8.00e-03 1.87e-00 1.26e-16
ABCbest Mean 1.27e-34+ 2.59e-31+ 3.17e-36+ 2.00e-56+ 3.92e-19+ 2.06e-01+ 0= 3.44e-17= 6.70e-03+ 2.48e-00+ 0=
SD 1.03e-34 2.54e-31 2.48e-36 4.43e-56 1.04e-19 5.93e-02 0 9.93e-17 1.91e-03 4.60e-00 0
CABC Mean 4.26e-36+ 2.32e-32+ 3.51e-37+ 7.98e-50+ 3.11e-19+ 3.91e-01+ 0= 3.55e-17= 1.62e-02+ 1.58e-01+ 0=
SD 4.76e-36 3.03e-32 6.22e-37 2.35e-49 3.27e-19 1.06e-01 0 8.30e-17 7.26e-03 1.37e-01 0
MuABC Mean 1.34e-51 2.47e-48 9.77e-53 4.81e-73 7.36e-28 3.81e-03 0 2.40e-17 4.82e-03 2.12e-02 0
SD 1.75e-51 2.43e-48 5.81e-53 7.81e-73 5.39e-28 1.15e-03 0 1.28e-17 1.25e-03 6.35e-02 0
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC Mean 1.10e-05+ 1.69e-05+ 1.36e-00+ 1.09e-06+ 7.33e-15+ 3.49e-14+ 7.63e-06+ 4.57e-10+ 9.54e-05+ −78.2731+ −44.6739+
SD 4.11e-05 4.88e-05 5.44e-00 5.96e-07 9.57e-15 2.63e-14 5.25e-06 8.00e-10 2.30e-05 6.46e-02 4.02e-01
GABC Mean 7.10e-17+ 5.91e-04+ 5.82e-13+ 4.90e-12+ 2.28e-25+ 2.16e-24+ 1.56e-06+ 2.22e-16+ 2.99e-13+ −78.3322+ −45.5645+
SD 3.55e-16 2.04e-03 4.45e-13 2.18e-12 2.23e-25 1.47e-24 5.41e-06 3.59e-16 2.12e-13 1.32e-05 5.21e-01
ABCbest Mean 0= 9.34e-03+ 1.81e-12+ 1.06e-14+ 3.14e-32= 1.34e-32= 3.99e-19+ 1.34e-31= 0= −78.3323= −48.0990+
SD 0 1.41e-02 4.06e-12 2.51e-15 0 0 4.34e-19 0 0 1.66e-08 3.59e-01
CABC Mean 0= 9.39e-10+ 2.18e-13+ 1.03e-14+ 3.14e-32= 1.34e-32= 4.57e-20+ 1.34e-31= 0= −78.3323= −48.6950+
SD 0 4.69e-09 3.96e-13 3.06e-15 1.11e-47 5.58e-48 4.71e-20 2.23e-47 0 9.59e-10 2.06e-01
MuABC Mean 0 8.36e-11 6.06e-14 1.42e-15 3.14e-32 1.34e-32 3.70e-29 1.34e-31 0 −78.3323 −49.2637
SD 0 1.44e-11 5.25e-14 2.65e-15 0 0 6.41e-29 0 0 4.92e-14 7.52e-02

Table 3
Result comparisons of ABCs on 30-dimensional functions f1 − f20 , and 100-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC Mean 2.02e-13+ 3.04e-09+ 4.14e-14+ 1.05e-20+ 1.43e-07+ 1.64e+01+ 0= 8.32e-16+ 1.93e-01+ 2.39e-01+ 1.55e-06+
SD 2.15e-13 2.36e-09 3.05e-14 2.48e-20 4.35e-08 2.88e-00 0 7.11e-16 5.29e-02 1.59e-01 2.87e-06
GABC Mean 1.92e-22+ 8.09e-19+ 2.09e-23+ 2.52e-34+ 3.23e-12+ 7.00e-00+ 0= 7.35e-09+ 8.94e-02+ 1.15e-00+ 1.15e-15+
SD 1.16e-22 5.81e-19 1.32e-23 5.30e-34 9.27e-13 1.01e-00 0 3.29e-08 2.42e-02 2.81e-00 3.22e-15
ABCbest Mean 5.30e-32+ 4.27e-28+ 7.33e-33+ 3.24e-57+ 2.91e-17+ 4.75e-00+ 0= 2.22e-16= 3.80e-02+ 2.53e+01+ 0=
SD 3.79e-32 6.66e-28 3.18e-33 7.25e-57 1.01e-17 1.02e-00 0 7.31e-17 2.25e-03 3.27e+01 0
CABC Mean 5.41e-35+ 4.67e-31+ 7.20e-36+ 3.24e-40+ 1.43e-18+ 6.01e-00+ 0= 2.22e-16= 4.98e-02+ 1.96e-01+ 0=
SD 7.02e-35 6.68e-31 7.79e-36 7.87e-40 5.68e-19 8.80e-01 0 2.31e-16 1.78e-02 1.36e-01 0
MuABC Mean 3.30e-50 3.33e-47 2.95e-51 3.30e-70 1.76e-27 2.65e-00 0 2.01e-16 2.79e-02 1.63e-01 0
SD 1.77e-50 2.63e-47 3.87e-51 5.72e-70 8.16e-28 4.91e-02 0 3.82e-17 2.51e-03 2.46e-01 0
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC Mean 1.58e-01+ 3.39e-09+ 3.78e+01+ 2.05e-06+ 1.43e-14+ 2.02e-13+ 5.70e-05+ 1.93e-10+ 2.53e-04+ -78.16483+ −87.6022+
SD 3.69e-02 1.00e-08 1.40e-00 5.86e-07 1.60e-14 1.67e-13 6.86e-05 2.58e-10 6.82e-05 1.37e-01 6.02e-01
GABC Mean 7.01e-14+ 2.08e-03+ 2.09e-11+ 2.15e-11+ 3.07e-24+ 4.80e-23+ 5.99e-06+ 1.24e-16+ 7.83e-12+ -78.3322+ -89.2972+
SD 9.42e-14 6.54e-03 6.66e-12 1.06e-11 4.71e-24 3.80e-23 9.71e-06 1.45e-16 7.19e-12 3.66e-05 5.83e-01
ABCbest Mean 0= 6.28e-11+ 1.45e-12+ 6.41e-14+ 1.59e-32= 2.97e-32= 7.02e-17+ 6.20e-31= 1.42e-15+ -78.3323= -93.4167+
SD 0 2.71e-11 8.13e-13 3.17e-15 5.77e-34 1.75e-32 4.23e-17 4.64e-31 3.17e-15 4.50e-08 6.48e-01
CABC Mean 0= 4.93e-04+ 1.00e-12+ 4.27e-14+ 1.57e-32= 1.34e-32= 3.53e-19+ 1.34e-31= 0= -78.3323= -95.5749+
SD 0 2.20e-03 5.28e-13 3.98e-15 2.80e-48 2.80e-48 2.66e-19 0 0 1.63e-09 4.24e-01
MuABC Mean 0 4.35e-14 1.21e-13 3.19e-14 1.57e-32 1.34e-32 7.40e-28 1.34e-31 0 -78.3323 -98.6923
SD 0 7.51e-14 1.05e-13 6.15e-15 0 0 1.28e-28 0 0 1.06e-13 1.24e-01
Table 4
Result comparisons of ABCs on 60-dimensional functions f1 − f20 , and 200-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC Mean 1.01e-12+ 1.54e-08+ 3.81e-13+ 1.69e-17+ 3.63e-07+ 5.30e+01+ 0= 1.99e-15+ 6.90e-01+ 4.38e-01+ 7.85e-01+
SD 5.20e-13 9.91e-09 1.84e-13 3.55e-17 5.59e-08 6.48e-00 0 2.71e-16 1.68e-01 2.70e-01 6.19e-01
GABC Mean 2.02e-21+ 1.01e-17+ 5.92e-22+ 7.02e-29+ 1.44e-11+ 4.64e+01+ 0= 3.38e-08+ 3.63e-01+ 1.09e+01+ 1.17e-13+
SD 1.17e-21 8.21e-18 3.15e-22 1.52e-28 1.79e-12 5.32e-00 0 7.55e-08 4.33e-02 1.93e+01 1.73e-13
ABCbest Mean 1.27e-29+ 5.68e-26+ 4.39e-30+ 1.36e-58+ 5.26e-16+ 3.28e+01+ 0= 4.66e-16+ 1.63e-01+ 2.38e+01+ 0=
SD 9.42e-30 5.14e-26 4.08e-30 2.75e-58 1.67e-16 3.64e-00 0 1.28e-16 2.93e-02 3.15e+01 0
CABC Mean 2.63e-34+ 3.43e-30+ 1.64e-34+ 2.65e-47+ 5.19e-18+ 3.99e+01+ 0= 7.10e-16+ 1.99e-01+ 3.27e-01+ 0=
SD 2.68e-34 3.60e-30 1.45e-34 5.54e-47 2.10e-18 7.62e-00 0 9.92e-17 6.07e-02 6.37e-01 0
MuABC Mean 5.73e-49 2.76e-46 7.36e-50 2.47e-68 7.73e-27 3.13e+01 0 3.40e-17 1.16e-01 2.14e-02 0
SD 8.10e-49 1.31e-46 5.88e-50 2.56e-68 4.20e-27 2.14e-00 0 1.28e-16 5.88e-03 2.19e-02 0
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC Mean 1.39e-00+ 6.73e-11+ 4.89e-00+ 3.93e-06+ 1.47e-14+ 5.96e-13+ 6.03e-04+ 7.72e-10+ 5.96e-04+ -77.9536+ -174.3284+
SD 9.48e-01 8.14e-11 2.23e+01 1.47e-06 8.05e-15 4.37e-13 7.68e-04 6.21e-10 5.70e-05 1.41e-01 1.14e-00
GABC Mean 2.01e-11+ 2.47e-02+ 4.51e-10+ 6.39e-11+ 1.33e-23+ 3.72e-22+ 1.52e-05+ 6.79e-16+ 1.33e-10+ -78.3322+ -176.5457+
SD 2.53e-11 3.29e-02 3.25e-10 2.32e-11 6.46e-24 1.67e-22 1.40e-05 6.18e-16 7.67e-11 8.35e-05 1.75e-01
ABCbest Mean 0= 1.79e-07+ 2.01e-10+ 9.66e-14+ 8.50e-32+ 1.68e-30+ 3.61e-14+ 9.46e-30+ 2.55e-14+ -78.3323= -181.1403+
SD 0 3.83e-07 3.25e-11 5.94e-15 4.14e-32 6.17e-31 3.96e-15 1.15e-29 6.35e-15 8.28e-08 7.73e-01
CABC Mean 0= 1.49e-08+ 1.85e-10+ 9.31e-14+ 7.85e-33= 1.34e-32= 4.51e-18+ 1.34e-31= 8.52e-15+ -78.3323= -187.3753+
SD 0 3.31e-08 3.25e-11 6.92e-15 2.79e-48 5.58e-48 8.82e-18 2.23e-47 7.78e-15 1.70e-09 8.33e-01
MuABC Mean 0 7.58e-15 5.09e-11 2.28e-14 7.85e-33 1.34e-32 7.40e-28 1.34e-31 8.36e-16 -78.3323 −196.6456
SD 0 9.77e-15 1.26e-11 3.10e-15 0 0 1.28e-29 0 8.20e-15 1.42e-14 2.84e-01


the other 21 benchmark functions. For the low-dimensional functions in Table 2, MuABC is significantly better than ABC, GABC,
ABCbest, and CABC on 21, 21, 13 and 13 cases, respectively. For the remaining cases, they perform equally, while MuABC is more
stable than the other compared ABCs in most cases; for example, MuABC has a smaller standard deviation on f21. From the
results for the middle-dimensional functions in Table 3, it is clear that MuABC consistently outperforms the compared algorithms
in most cases. MuABC is significantly superior to ABC, GABC, ABCbest, and CABC on 21, 21, 14 and 13 cases, respectively, while
ABC, GABC, ABCbest, and CABC cannot surpass MuABC in any case. For the high-dimensional functions in Table 4, similar to
the results for the low- and middle-dimensional functions, MuABC is significantly better than ABC, GABC, ABCbest, and CABC on
21, 20, 18, and 15 cases, respectively. For the remaining cases, they also perform equally, while MuABC shows better stability than
the other compared ABCs in most cases.
Further experiments are reported in Tables 5–7 to compare the convergence speed and the stability of
the five algorithms. The results summarized there contain AVEN and SR%. Specifically, AVEN refers to the average FES needed to
reach the threshold defined as "Accept" in Table 1, and SR% refers to the percentage of successful runs out of the 50 independent
runs for each test function. To illustrate the advantage of MuABC, the convergence graphs of some benchmark functions
are drawn in Figs. 3–4.
It can be seen from Tables 5–7 and Figs. 3–4 that MuABC is faster than ABC, GABC, ABCbest and CABC in most cases. This
may be because MuABC constructs efficient candidate solutions by combining different search strategies through a reasonable
mechanism, which speeds up the search and finds a good balance between exploration and exploitation.
Furthermore, the success rates reported in Tables 5–7 also demonstrate that the proposed approach brings a high reliability to ABC. In particular, MuABC achieves a 100% success rate on all the cases except
the Quartic function.

4.3. Comparisons with other state-of-the-art algorithms

Furthermore, MuABC is compared against some EA variants. These algorithms are listed as follows:
(1) ALEP [35]: evolutionary programming with adaptive Levy mutation;
(2) FEP [36]: fast evolutionary programming with Cauchy mutation;
(3) OGA/Q [32]: orthogonal genetic algorithm with quantization;
(4) EDA/L [37]: estimation of distribution algorithm with local search;
(5) CEP [38]: conventional evolutionary programming;
(6) LEA [34]: EA based on level-set evolution and Latin squares;
(7) DE [2]: original DE;
(8) jDE [39]: self-adapting control parameters in DE;
(9) SaDE [31]: DE with strategy adaptation;
(10) JADE [40]: adaptive DE with optional external archive;
(11) PSO [3]: original PSO;
(12) FIPS [43]: “fully informed” PSO;
(13) HPSO–TVAC [45]: self-organizing hierarchical PSO with time-varying acceleration coefficients;
(14) CLPSO [44]: comprehensive learning PSO;
(15) OLPSO-G [46]: orthogonal learning PSO.
In Table 8, MuABC is compared to ALEP, FEP, CEP/best, OGA/Q, and LEA. The results of these algorithms are taken from their
original references. NA means the results are not provided in the original references. For clarity, we mark the results of the best
algorithms in boldface. Table 8 shows that MuABC offers good performance and does best on 7 of 9 cases with the smallest FES,
while OGA/Q outperforms MuABC on the Sphere and Schwefel 2.22 functions.
Further experimental results are presented in Tables 9 and 10. MuABC is compared with DE variants in Table 9; the results of
the compared DEs are reported in [40–42]. In addition, MuABC is compared with PSO variants in Table 10, whose results are taken
from [46]. It is quite clear that MuABC performs best on almost all the cases and has obvious superiority over the other contenders.

4.4. Application to sound frequency modulator synthesis problem

Frequency-modulated (FM) sound synthesis plays an important role in several modern music systems. Das et al. [47] intro-
duced a system that can automatically generate sounds similar to the target sounds. In this system, the formulas of the estimated
sound wave and the target sound wave are given by the following equations:

y(t ) = a1 sin (ω1 · t · θ − a2 · sin (ω2 · t · θ + a3 · sin (ω3 · t · θ ))), (4.1)

y0 (t ) = 1.0 sin (5.0 · t · θ − 1.5 · sin (4.8 · t · θ + 2.0 · sin (4.9 · t · θ ))), (4.2)

where θ = 2π /100.
The goal is to minimize the sum of squared errors between the estimated sound and the target sound, as shown by Eq. (4.3).
This problem involves a multimodal function with strong epistasis (interrelation among the variables), with the optimum
value 0.0.
Table 5

Convergence speed and successful rate comparisons of ABCs on 15-dimensional functions f1 − f20 , and 50-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC SR 100 100 100 100 0 100 100 100 0 100 50


AVEN 3.33e+04 4.59e+04 3.00e+04 1.00e+04 - 1.98e+04 1.19e+04 2.14e+04 - 2.18e+04 4.79e+04
GABC SR 100 92 100 100 100 100 100 100 4 96 100
AVEN 2.10e+04 2.80e+04 1.91e+04 6.70e+03 3.36e+04 1.19e+04 7.91e+03 1.39e+04 3.89e+04 1.97e+04 3.12e+04
ABCbest SR 100 100 100 100 100 100 100 100 100 70 100
AVEN 1.62e+04 2.05e+04 1.44e+04 4.87e+03 2.42e+04 3.09e+03 6.19e+03 1.11e+04 4.93e+03 1.87e+04 1.87e+04
CABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 1.56e+04 2.07e+04 1.42e+04 5.66e+03 2.41e+04 1.16e+04 6.02e+03 1.05e+04 3.56e+04 1.42e+04 1.80e+04
MuABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 1.11e+04 1.41e+04 1.02e+04 4.14e+03 1.70e+04 2.71e+03 4.10e+03 7.34e+03 3.81e+03 1.03e+04 1.42e+04
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC SR 0 52 36 0 100 100 0 100 0 100 0
AVEN - 4.59e+04 4.83e+04 - 3.00e+04 3.28e+04 - 3.96e+04 - 4.20e+04 -
GABC SR 100 92 100 100 100 100 56 100 100 100 0
AVEN 3.40e+04 3.48e+04 3.01e+04 3.71e+04 1.84e+04 2.03e+04 4.69e+04 2.38e+04 4.12e+04 2.43e+04 -
ABCbest SR 100 68 100 100 100 100 100 100 100 100 0
AVEN 1.91e+04 3.14e+04 2.20e+04 2.61e+04 1.40e+04 1.51e+04 2.52e+04 1.66e+04 2.86e+04 1.59e+04 -
CABC SR 100 96 100 100 100 100 100 100 100 100 82
AVEN 1.90e+04 2.65e+04 1.81e+04 2.58e+04 1.33e+04 1.47e+04 2.30e+04 1.69e+04 2.80e+04 1.43e+04 3.59e+04
MuABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 1.49e+04 2.24e+04 1.44e+04 1.83e+04 9.54e+03 1.07e+04 1.85e+04 1.08e+04 1.91e+04 1.18e+04 3.23e+04

Table 6

Convergence speed and successful rate comparisons of ABCs on 30-dimensional functions f1 − f20 , and 100-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC SR 100 100 100 100 0 32 100 100 0 100 0


AVEN 8.52e+04 9.54e+04 6.63e+04 2.05e+04 - 9.29e+04 2.83e+04 4.68e+04 - 5.38e+04 -
GABC SR 100 100 100 100 100 100 100 94 0 94 100
AVEN 4.71e+04 6.09e+04 4.29e+04 1.25e+04 7.31e+04 6.15e+04 1.82e+04 3.18e+04 - 5.38e+04 7.07e+04
ABCbest SR 100 100 100 100 100 100 100 100 100 60 100
AVEN 3.53e+04 4.61e+04 3.37e+04 9.80e+03 5.46e+04 1.67e+04 1.46e+04 2.49e+04 3.30e+04 5.51e+04 4.34e+04
CABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 3.30e+04 4.39e+04 3.09e+04 1.22e+04 5.07e+04 5.32e+04 1.34e+04 2.27e+04 3.63e+04 3.52e+04 4.15e+04
MuABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 2.38e+04 2.93e+04 2.17e+04 5.62e+03 3.44e+04 1.39e+04 8.90e+03 1.57e+04 1.77e+04 2.27e+04 3.92e+04
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC SR 0 92 6 0 100 100 0 100 0 100 0
AVEN - 8.43e+04 9.74e+04 - 6.16e+04 7.10e+04 - 7.93e+04 - 8.63e+04 -
GABC SR 100 94 100 100 100 100 6 100 100 100 0
AVEN 7.75e+04 5.71e+04 7.17e+04 7.88e+04 3.96e+04 4.40e+04 9.89e+04 4.72e+04 8.79e+04 5.98e+04 -
ABCbest SR 100 100 100 100 100 100 100 100 100 100 0
AVEN 4.28e+04 4.26e+04 4.05e+04 5.75e+04 3.05e+04 3.36e+04 5.61e+04 2.46e+04 6.20e+04 4.50e+04 -
CABC SR 100 96 100 100 100 100 100 100 100 100 82
AVEN 4.24e+04 4.45e+04 3.86e+04 5.32e+04 2.78e+04 3.11e+04 5.03e+04 3.51e+04 5.83e+04 3.09e+04 9.45e+04
MuABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 3.25e+04 3.15e+04 2.87e+04 3.72e+04 1.91e+04 2.21e+04 3.91e+04 2.26e+04 3.93e+04 2.35e+04 5.43e+04
Table 7

Convergence speed and successful rate comparisons of ABCs on 60-dimensional functions f1 − f20 , and 200-dimensional functions f21 and f22 .

Fun f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11

ABC SR 100 48 100 100 0 0 100 100 0 100 0


AVEN 1.50e+05 1.97e+05 1.43e+05 4.27e+04 - - 8.25e+04 1.03e+05 - 1.33e+05 -
GABC SR 100 100 100 100 100 0 100 86 0 62 100
AVEN 1.00e+05 1.30e+05 9.66e+04 2.64e+05 1.56e+05 - 4.22e+04 6.94e+04 - 1.29e+05 1.57e+05
ABCbest SR 100 100 100 100 100 100 100 100 0 22 100
AVEN 7.87e+04 1.01e+05 7.62e+04 2.11e+04 1.18e+05 1.26e+05 3.25e+04 5.49e+04 - 1.71e+05 9.13e+05
CABC SR 100 100 100 100 100 62 100 100 0 100 100
AVEN 6.96e+04 9.06e+04 6.66e+04 2.46e+04 1.06e+05 1.52e+05 2.83e+04 4.83e+04 - 8.70e+04 8.20e+04
MuABC SR 100 100 100 100 100 100 100 100 0 100 100
AVEN 4.92e+04 6.07e+04 4.65e+04 1.19e+04 7.21e+05 1.12e+05 1.93e+04 3.37e+04 - 5.47e+04 7.65e+04
Fun f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
ABC SR 0 100 0 0 100 100 0 100 0 48 0
AVEN - 1.66e+05 - - 1.23e+05 1.47e+05 - 1.61e+05 - 1.95e+05 -
GABC SR 100 84 100 100 100 100 0 100 100 100 0
AVEN 1.73e+05 1.09e+05 1.61e+05 1.64e+05 8.27e+04 9.49e+04 - 9.80e+04 1.85e+05 1.21e+05 -
ABCbest SR 100 100 100 100 100 100 100 100 100 100 0
AVEN 9.47e+04 8.99e+04 1.23e+05 1.21e+05 6.46e+04 7.46e+04 1.26e+05 7.47e+04 1.09e+05 7.26e+04 -
CABC SR 100 86 100 100 100 100 100 100 100 100 0
AVEN 8.56e+04 8.64e+04 8.25e+04 1.10e+05 5.62e+04 6.50e+04 1.03e+05 7.42e+04 1.20e+05 6.38e+04 -
MuABC SR 100 100 100 100 100 100 100 100 100 100 100
AVEN 7.24e+04 6.29e+04 6.26e+04 7.56e+04 3.93e+04 4.39e+04 7.94e+04 4.46e+04 8.14e+04 4.69e+04 1.16e+05


[Figure: convergence curves (Fitness versus FES) of ABC, GABC, ABCbest, CABC and MuABC on the SumSquare, Schwefel 2.21, NCRastrigin, Penalized 1 and Weierstrass functions with D = 15, and on the Sphere, SumPower and Step functions with D = 30.]

Fig. 3. Convergence graph of different ABCs on the 8 test functions with D = 15 or 30.

[Figure: convergence curves (Fitness versus FES) of ABC, GABC, ABCbest, CABC and MuABC on the Rosenbrock, Rastrigin and Alpine functions with D = 30, and on the Elliptic, Schwefel 2.22, Ackley, Penalized 2 and Levy functions with D = 60.]

Fig. 4. Convergence graph of different ABCs on 8 test functions with D = 30 or 60.



Table 8
Comparisons between MuABC and several variant evolutionary algorithms on optimizing 30- or 100-dimensional functions.

Algorithms Sphere Schwefel 2.22 Schwefel 2.26

Mean.FE Mean SD Mean.FE Mean SD Mean.FE Mean SD

ALEP 150,000 6.3e-04 7.6e-05 NA NA NA 150,000 1.1e+03 5.8e+01


FEP 150,000 5.7e-04 1.3e-04 200,000 8.1e-03 7.7e-04 900,000 1.4e+01 5.2e+01
CEP/best 250,000 3.9e-07 NA 250,000 1.9e-03 NA NA NA NA
OGA/Q 112,559 0 0 112,612 0 0 302,116 3.0e-02 6.4e-04
LEA 110,654 4.7e-16 6.2e-17 110,031 4.2e-19 4.2e-19 302,116 3.0e-02 6.4e-04
MuABC 50,000 2.35e-23 1.92e-25 100,000 1.76e-27 8.16e-28 100,000 1.21e-13 1.05e-13
Rastrigin Griewank Penalized 1

ALEP 150,000 5.8e-00 2.1e-00 150,000 2.4e-02 2.8e-02 150,000 6.0e-06 1.0e-06
FEP 500,000 4.6e-02 1.2e-02 200,000 1.6e-02 2.2e-02 150,000 9.2e-06 3.6e-06
CEP/best 250,000 4.7e-00 NA 250,000 2.7e-07 NA NA NA NA
OGA/Q 224,710 0 0 134,000 0 0 134,556 6.0e-06 1.1e-06
LEA 223,803 2.1e-18 3.3e-18 140,498 6.1e-16 2.5e-17 132,642 2.4e-06 2.2e-06
MuABC 100,000 0 0 100,000 0 0 50,000 5.14e–25 6.08e-25
Penalized 2 Himmelblau Michalewicz

ALEP 150,000 9.8e-05 1.2e-05 NA NA NA NA NA NA


FEP 150,000 1.6e-04 7.3e-05 NA NA NA NA NA NA
EDA/L 114,570 3.4e-21 NA 153,116 −78.3107 NA 168,885 −94.3757 NA
OGA/Q 134,143 1.8e-04 2.6e-05 245,930 −78.3000 6.2e-03 302,773 −92.83 2.6e-02
LEA 130,213 1.7e-04 1.2e-04 243,895 −78.3100 6.1e-03 289,863 −93.01 2.3e-02
MuABC 50,000 4.86e-24 3.93e-24 150,000 −78.3323 3.75e-14 150,000 −99.2071 4.69e-02

Table 9
Comparisons between MuABC and DEs on optimizing 30-dimensional functions.

Fun Max.FEs DE jDE JADE SaDE MuABC

Sphere 150,000 9.8e-14 (8.4e-14) 1.46e-28 (1.78e-28) 2.69e-56 (1.41e-55) 3.28e-20 (3.63e-20) 1.81e-76 (2.37e-76)
Schwefel 2.22 200,000 1.6e-09 (1.1e-09) 9.02e-24 (6.01e-24) 3.18e-25 (2.05e-24) 3.51e-25 (2.74e-25) 2.48e-55 (2.82e-55)
Rosenbrock 300,000 2.1e+00 ( 1.5e+00) 1.3e+01 (1.4e+01) 3.2e-01 (1.1e+00) 2.1e+01 (7.8e+00) 1.45e-04 (5.60e-06)
Step 10,000 4.7e+03 (1.1e+03) 6.13e+02 (1.72e+02) 5.62e+00 (1.87e+00) 5.07e+01 (1.34e+01) 0 (0)
Schwefel 2.26 100,000 5.9e+03 (1.1e+03) 1.70e-10 (1.71e-10) 2.62e-04 (3.59e-04) 1.13e-08 (1.08e-08) 1.21e-13 (1.05e-13)
Rastrigin 100,000 1.8e+02 (1.3e+01) 3.32e-04 (6.39e-04) 1.33e-01 (9.74e-02) 2.43e+00 (1.60e+00) 0 (0)
Ackley 50,000 1.1e-01 (3.9e-02) 2.37e-04 (7.10e-05) 3.35e-09 (2.84e-09) 3.81e-06 (8.26e-07) 3.35e-12 (6.02e-13)
Griewank 50,000 2.0e-01 (1.1e-01) 7.29e-06 (1.05e-05) 1.57e-08 (1.09e-07) 2.52e-09 (1.24e-08) 8.09e-10 (1.78e-10)
Penalized 1 50,000 1.2e-02 (1.0e-02) 7.03e-08 (5.74e-08) 1.67e-15 (1.02e-14) 8.25e-12 (5.12e-12) 5.14e-25 (6.08e-25)
Penalized 2 50,000 7.5e-02 (3.8e-02) 1.80e-05 (1.42e-05) 1.87e-10 (1.09e-09) 1.93e-09 (1.53e-09) 4.86e-24 (3.93e-24)
Alpine 300,000 2.3e-04 (1.7e-04) 6.08e-10 (8.36e-10) 2.78e-05 (8.43e-06) 2.94e-06 (3.47e-06) 9.12e-93 (1.25e-92)


f(X) = ∑_{t=0}^{100} (y(t) − y0(t))^2. (4.3)

The parameters of X = {a1 , ω1 , a2 , ω2 , a3 , ω3 } are defined in the range [−6.4, +6.35]. In Table 11, we report the best, worst,
median, mean and standard deviation values obtained by the five algorithms through 30 independent runs. Table 11 shows that
MuABC is significantly better than the other algorithms.
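A sketch of the objective defined by Eqs. (4.1)–(4.3), which could be passed to any of the compared optimizers as a six-dimensional minimization problem (function names are ours; the target parameters are those of Eq. (4.2)):

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0

def fm_wave(t, a1, w1, a2, w2, a3, w3):
    # Eqs. (4.1)/(4.2): a1*sin(w1*t*theta - a2*sin(w2*t*theta + a3*sin(w3*t*theta)))
    return a1 * np.sin(w1 * t * THETA
                       - a2 * np.sin(w2 * t * THETA + a3 * np.sin(w3 * t * THETA)))

def fm_fitness(X):
    # Eq. (4.3): sum of squared errors over t = 0, ..., 100 against the target wave
    t = np.arange(101)
    y = fm_wave(t, *X)                                # X = (a1, w1, a2, w2, a3, w3)
    y0 = fm_wave(t, 1.0, 5.0, 1.5, 4.8, 2.0, 4.9)     # Eq. (4.2) target parameters
    return float(np.sum((y - y0) ** 2))
```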

5. Conclusion

To address the problem of slow convergence in ABC, we first propose a novel search mechanism that introduces the Gaussian distribution, which can be applied to the original search equation as well as to its modified variants to improve the
search ability. Further, three search strategies are selected to form the strategy candidate pool, which is systematically exploited by the adaptive selection mechanism. Thus, a novel approach, called MuABC, is designed to improve the performance of ABC. Finally, the comparison results on 22 test functions show the effectiveness of MuABC: it significantly improves the performance of ABC, offering higher solution accuracy, faster convergence speed, and stronger
reliability.
It may be worthwhile to apply MuABC to more complex practical optimization problems, such as the design of wireless telecommunications networks, image processing, data classification and so on. As potential future work, it may be interesting to extend
MuABC to handle multi-objective optimization problems and combinatorial optimization problems.
Table 10
Comparisons between MuABC and PSOs on optimizing 30-dimensional functions.

Fun PSO FIPS HPSO-TVAC CLPSO OLPSO-G MuABC

Sphere 3.34e–14 (5.39e–14) 2.42e–13 (1.73e-13) 2.83e–33 (3.19e–33) 1.58e–12 (7.70e–13) 4.12e–54 (6.34e–54) 8.80e–105 (6.56e–105)
Schwefel 2.22 1.70e-10 (1.39e-10) 2.76e-08 (9.04e-09) 9.03e-20 (9.58e-20) 2.51e–08 (5.84e–09) 9.85e–30 (1.01e–29) 2.48e–55 (2.82e–55)
Rosenbrock 2.80e+01 ( 2.17e+01) 2.51e+01 (5.10e-01) 2.39e+01 (2.65e+01) 1.13e+01 (9.85e–00) 2.15e+01 (2.99e+01) 4.93e–03 (4.71e–03)
Step 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0)
Schwefel 2.26 3.16e+03 (4.06e+02) 9.93e+02 (5.09e+02) 1.59e+03 (3.26e+02) 3.82e–04 (1.28e–05) 3.84e+02 (2.17e+02) 0 (0)
Rastrigin 3.57e+01 (6.89e–00 ) 6.51e+01 (1.33e+01) 9.43e–00 (3.48e–00) 9.09e–05 (1.25e–04) 1.07e–00 (9.92e–01) 0 (0)
NCRastrigin 4.36e+01 (1.12e+01) 7.01e+01 (1.47e+01) 1.03e+01 (8.24e–00) 1.54e–00 (2.75e–00) 2.18e–00 (6.31e–01) 0 (0)
Ackley 8.20e–08 (6.73e–08 ) 2.33e–07 (7.19e–08) 7.29e–14 (3.00e–14) 3.66e–07 (7.57e–08) 7.98e–15 (2.03e–15) 3.10e–15 (2.02e–15)
Griewank 1.53e–03 (4.32e–03) 9.01e–12 (1.84e–11) 9.75e–03 (8.33e–03) 9.02e–09 (8.57e–09) 4.83e–03 (8.63e–03) 0 (0)
Penalized 1 8.10e–16 (1.07e–15 ) 1.96e–15 (1.11e–15) 2.71e–29 (1.88e–28) 6.45e–14 (3.70e–14) 1.59e–32 (1.03e–33) 1.57e–32 (0)
Penalized 2 3.26e–13 ( 3.70e–13 ) 2.70e–14 (1.57e–14) 2.79e–28 (2.18e–28) 1.25e–12 (9.45e–12) 4.39e–04 (2.20e–03) 1.35e–32 (0)


Table 11
Best, worst, median, mean and standard deviation values obtained by five algo-
rithms through 30 independent runs on the frequency modulator synthesis problem.

Best Worst Median Mean SD

ABC 2.98e+01 3.14e+01 3.10e+01 3.08e+01+ 5.94e–01


GABC 9.05e–03 7.32e–01 2.03e–02 6.26e–02+ 6.03e–02
ABCbest 2.64e–03 3.70e–02 9.63e–03 5.38e–03+ 4.38e–03
CABC 1.39e–04 8.97e–04 4.72e–04 5.13e–04+ 4.28e–04
MuABC 6.27e–08 1.82e–06 7.43e–07 5.91e–07 3.72e–07

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under grants 61402534, 61373174,
61201455, 61301243, and 11201484, by the Shandong Provincial Natural Science Foundation, China under grant ZR2014FQ002
and by the Fundamental Research Funds for the Central Universities under grants 14CX02160A and 15CX02057A.

References

[1] K.Y. Tam, Genetic algorithms, function optimization, and facility layout design, Eur. J. Oper. Res. 63 (1995) 322–346.
[2] R. Storn, K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Optim. 11 (1997) 341–359.
[3] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE Conference on Neural Networks, 1995, pp. 1942–1948.
[4] D. Simon, Biogeography-based optimization, IEEE Trans. Evolut. Comput. 12 (2008) 702–713.
[5] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, Eur. J. Oper. Res. 185 (2008) 1155–1173.
[6] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report-TR06, Erciyes University, Kayseri, Turkey, 2005.
[7] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. 39
(2007) 459–471.
[8] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[9] D. Karaboga, B. Basturk, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (2009) 108–132.
[10] D. Karaboga, Artificial bee colony algorithm, Scholarpedia 5 (2010) 6915.
[11] W.Y. Szeto, Y.Z. Wu, S.C. Ho, An artificial bee colony algorithm for the capacitated vehicle routing problem, Eur. J. Oper. Res. 215 (2011) 126–135.
[12] Q.K. Pan, M.F. Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Inf. Sci.
181 (2011) 2455–2468.
[13] W.F. Gao, S.Y. Liu, F. Jiang, An improved artificial bee colony algorithm for directing orbits of chaotic systems, Appl. Math. Comput. 218 (2011) 3868–3879.
[14] B. Basturk, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Inf. Sci. 192 (2012) 120–142.
[15] F. Kang, J.J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Comput. Struct. 87 (2009) 861–870.
[16] F. Kang, J.J. Li, Z.Y. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions, Inf. Sci. 181 (2011) 3508–3531.
[17] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Syst. Appl. 37 (2010) 5682–5687.
[18] H. Zhao, Z. Pei, J. Jiang, R. Guan, C. Wang, X. Shi, A hybrid swarm intelligent method based on genetic algorithm and artificial bee colony, Advances in Swarm
Intelligence (Lecture Notes in Computer Science 6145), Springer, Berlin, Germany, 2010, pp. 558–565.
[19] H.B. Duan, C.F. Xu, Z.H. Xing, A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems, Int. J.
Neural Syst. 20 (2010) 39–50.
[20] X. Shi, Y. Li, H. Li, R. Guan, L. Wang, Y. Liang, An integrated algorithm based on artificial bee colony and particle swarm optimization, in: Proceedings of the
IEEE International Joint Conference on Neural Networks, 2010, pp. 2586–2590.
[21] C. Sumpavakup, S. Chusanapiputt, I. Srikun, A hybrid cultural-based bee colony algorithm for solving the optimal power flow, in: Proceedings of the IEEE
International Midwest Symposium on Circuits and Systems, 2011, pp. 1–4.
[22] W.L. Xiang, M.Q. An, An efficient and robust artificial bee colony algorithm for numerical optimization, Comput. Oper. Res. 40 (2013) 1256–1265.
[23] W.F. Gao, S.Y. Liu, L.L. Huang, A novel artificial bee colony algorithm with Powell’s method, Appl. Soft Comput. 13 (2013a) 3763–3775.
[24] W.F. Gao, S.Y. Liu, L.L. Huang, A global best artificial bee colony algorithm for global optimization, IEEE Trans. Cybern. 43 (2013b) 1011–1024.
[25] G.P. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (2010) 3166–3173.
[26] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in artificial bee colony algorithm, Appl. Soft Comput. 11 (2011) 2888–2901.
[27] G.Q. Li, P.F. Niu, X.J. Xiao, Development and investigation of efficient artificial bee colony algorithm for numerical function optimization, Appl. Soft Comput.
12 (2012) 320–332.
[28] P.W. Tsai, J.S. Pan, B.Y. Liao, S.C. Chu, Enhanced artificial bee colony optimization, Int. J. Innovative Comput. Inf. Control 5 (2009) 1–12.
[29] L.S. Coelho, P. Alotto, Gaussian artificial bee colony algorithm approach applied to Loney’s solenoid benchmark problem, IEEE Trans. Magn. 47 (2011) 1329.
[30] W.F. Gao, S.Y. Liu, L.L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (2012) 2741–2753.
[31] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evolut. Comput.
13 (2009) 398–417.
[32] Y.W. Leung, Y. Wang, An orthogonal genetic algorithm with quantization for global numerical optimization, IEEE Trans. Evolut. Comput. 5 (2001) 41–53.
[33] Y.W. Shang, Y.H. Qiu, A note on the extended Rosenbrock function, Evolut. Comput. 14 (2006) 119–126.
[34] Y.P. Wang, C.Y. Dang, An evolutionary algorithm for global optimization based on level-set evolution and Latin squares, IEEE Trans. Evolut. Comput. 11 (2007)
579–595.
[35] C.Y. Lee, X. Yao, Evolutionary programming using mutations based on the Lévy probability distribution, IEEE Trans. Evolut. Comput. 8 (2004) 1–13.
[36] X. Yao, Y. Liu, G.M. Lin, Evolutionary programming made faster, IEEE Trans. Evolut. Comput. 3 (1999) 82–102.
[37] Q. Zhang, J. Sun, E. Tsang, J. Ford, Hybrid estimation of distribution algorithm for global optimization, Comput. Eng. 21 (2004) 91–107.
[38] K. Chellapilla, Combining mutation operators in evolutionary programming, IEEE Trans. Evolut. Comput. 2 (1998) 91–96.
[39] J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark
problems, IEEE Trans. Evolut. Comput. 10 (2006) 646–657.
[40] J. Zhang, A.C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evolut. Comput. 13 (2009) 945–958.
[41] W.Y. Gong, Z.H. Cai, C.X. Ling, H. Li, Enhanced differential evolution with adaptive strategies for numerical optimization, IEEE Trans. Syst. Man Cybern. - Part
B 41 (2011) 397–413.
[42] Y. Wang, Z. Cai, Q.F. Zhang, Differential evolution with composite trial vector generation strategies and control parameters, IEEE Trans. Evolut. Comput. 15
(2011) 55–66.
[43] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evolut. Comput. 8 (2004) 204–210.

[44] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans.
Evolut. Comput. 10 (2006) 281–295.
[45] A. Ratnaweera, S. Halgamuge, H. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans.
Evolut. Comput. 8 (2004) 240–255.
[46] Z.H. Zhan, J. Zhang, Y. Li, Y.H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evolut. Comput. 15 (2011) 832–847.
[47] S. Das, A. Abraham, U.K. Chakraborty, A. Konar, Differential evolution using a neighborhood based mutation operator, IEEE Trans. Evolut. Comput. 13 (2009)
526–553.
