Article history:
Received 5 March 2013
Received in revised form 28 March 2014
Accepted 22 June 2014
Available online 28 June 2014

Keywords:
Optimization
Swarm intelligence
Artificial bee colony
Quick artificial bee colony

Abstract

Artificial bee colony (ABC) algorithm, inspired by the foraging behaviour of honey bees, is one of the most popular swarm intelligence based optimization techniques. Quick artificial bee colony (qABC) is a new version of the ABC algorithm which models the behaviour of onlooker bees more accurately and improves the performance of standard ABC in terms of local search ability. In this study, the qABC method is described and its performance is analysed, depending on the neighbourhood radius, on a set of benchmark problems. Analyses of the effect of the parameter limit and of the colony size on qABC optimization are also carried out. Moreover, the performance of qABC is compared with that of state of the art algorithms.

© 2014 Elsevier B.V. All rights reserved.
1. Introduction

Optimization is the task of finding the best solution with respect to an objective in the search space of a problem. In order to solve real world optimization problems – especially NP hard problems – evolutionary computation (EC) based optimization methods, which consist of evolutionary algorithms and swarm intelligence based algorithms, are frequently preferred. A swarm intelligence based algorithm models an intelligent behaviour of social creatures that can be characterised as an intelligent swarm, and this model can be used for searching for the optimal solutions of various engineering problems. The artificial bee colony (ABC) algorithm is a swarm intelligence based optimization technique that models the foraging behaviour of honey bees in nature [1]. This algorithm was first introduced to the literature by Karaboga in 2005 [2]. Since then, the ABC algorithm has been used in many applications; a good survey of the ABC algorithm can be found in [3].

Although standard ABC optimization generally produces successful results in many application studies, to get a better performance from the ABC algorithm some researchers attempted to implement ABC in parallel [4–8] and some integrated concepts from other EC based methods with the ABC algorithm [9–20]. Zhu and Kwong were influenced by particle swarm optimization (PSO) and proposed an improved ABC algorithm named gbest-guided ABC [9]. They placed the global best solution into the solution search equation. For the multiple sequence alignment problem, Xu and Lei introduced the Metropolis acceptance criterion into the search process of ABC to prevent the algorithm from sliding into a local optimum [10]; they called the improved version of ABC "ABC SA". Tuba et al. proposed the guided ABC (GABC) algorithm, which integrates ABC optimization with self-adaptive guidance [11]. Li et al. used inertia weight and acceleration coefficients to get a better performance in the search process of the ABC algorithm [12]. A new scheduling method based on best-so-far ABC for solving the JSSP was proposed by Banharnsakun et al. [13]; in this method the solution direction is biased toward the best-so-far solution rather than a neighbour one. Bi and Wang presented a modification of the scouts' behaviour in the ABC algorithm [14]. They used a mutation strategy based on opposition-based learning and called the new method fast mutation ABC. Inspired by the differential evolution (DE) algorithm, Gao and Liu presented a new solution search equation for standard ABC [15]. The new equation aims to improve the exploitation capability and is based on the bee searching only around the best solution of the previous iteration. Mezura-Montes and Cetina-Dominguez presented a modified ABC to solve constrained numerical optimization problems [16]. They modified the ABC algorithm with a selection mechanism, a scout bee operator and the handling of equality and boundary constraints. For constrained optimization problems, Bacanin and Tuba introduced some modifications based on genetic algorithm operators to the ABC algorithm [17]. In order to improve the exploitation capability of the ABC algorithm, Gao et al. presented an improved ABC [18]. In this improved version, they used a modified search strategy to generate new food sources.

∗ Corresponding author. Tel.: +90 3522076666 32554. E-mail addresses: karaboga@erciyes.edu.tr (D. Karaboga), bgorkemli@erciyes.edu.tr (B. Gorkemli).
http://dx.doi.org/10.1016/j.asoc.2014.06.035
228 D. Karaboga, B. Gorkemli / Applied Soft Computing 23 (2014) 227–238
They used the opposition based learning method and chaotic maps to produce the initial population for a better global convergence. For a better exploitation, Gao et al. also presented a modified ABC algorithm inspired by DE [19]. With this modification, each bee searches only around the best solution of the previous cycle to improve the exploitation. They also used chaotic systems and the opposition based learning method while producing the initial population and scout bees to improve the global convergence. Liu et al. introduced an improved ABC algorithm with mutual learning [20]. This approach adjusts the produced candidate food source using some individuals which are selected by a mutual learning factor.

In order to get a more powerful optimization technique, some researchers combined the ABC algorithm with other EC based methods or traditional algorithms. Kang et al. proposed a hybrid simplex ABC [21]. In [21], they combined ABC with the Nelder–Mead simplex method, and in another study they used this new method for inverse analysis problems [22]. Marinakis et al. proposed a hybrid algorithm based on ABC optimization and a greedy randomised adaptive search procedure in order to cluster n objects into k clusters [23]. Another hybrid ABC was described by Xiao and Chen, using an artificial immune network algorithm [24]; they applied the hybrid algorithm to the multi-mode resource constrained multi-project scheduling problem. Bin and Qian introduced a differential ABC algorithm for global numerical optimization [25]. Sharma and Pant used DE operators with the standard ABC algorithm [26]. Kang et al. demonstrated how the standard ABC can be improved by incorporating a hybridization strategy [27]; they suggested a novel hybrid optimization technique composed of Hooke–Jeeves pattern search and the ABC algorithm. Hsieh et al. proposed a new hybrid algorithm of PSO and ABC optimization [28]. Abraham et al. made a hybridization of the ABC algorithm and a DE strategy, and called this novel approach the hybrid differential ABC algorithm [29].

In this work, instead of presenting a new hybrid ABC algorithm or integrating an operator of an existing algorithm into ABC, our aim is to model the behaviour of foragers in ABC more accurately by introducing a new definition for onlooker bees. By using the new definition proposed for onlooker bees, ABC achieves a better performance than standard ABC in terms of local search ability. Hence, we call our new algorithm quick ABC (qABC) [30]. In this work, the qABC algorithm is described in a more detailed way, its performance is tested on a set of test problems larger than that in [30], and the effects of the control parameters neighbourhood radius, limit and colony size on its performance are investigated. The performance of qABC is also compared with state of the art algorithms. The rest of the paper is organised as follows: Section 2 describes the standard ABC, and the novel strategy (qABC) is presented in Section 3. The computational study and simulation results are demonstrated in Section 4 and, finally, the conclusion is given in Section 5.

2. Artificial bee colony (ABC) algorithm

In the ABC algorithm, at the beginning of the search process all of the bees in the hive work as scouts and they all start with random solutions, or food sources. In further cycles, when a food source is abandoned, the employed bee related to the abandoned source becomes a scout. In the algorithm, a parameter, limit, is used to control the abandonment of food sources. For each solution, the number of unsuccessful improvement trials is recorded; in every cycle, the solution with the maximum trial number is determined and its trial number is compared with the parameter limit. If it reaches the limit value, this solution is abandoned and the search process continues with a randomly produced new solution.

In the ABC algorithm, a food source position is defined as a possible solution, and the nectar amount or quality of the food source corresponds to the fitness of the related solution in the optimization process. Since each employed bee is associated with one and only one food source, the number of employed bees is equal to the number of food sources.

The general algorithmic structure of the ABC optimization is given below:

Initialization phase
REPEAT
Employed bees phase
Onlooker bees phase
Scout bees phase
Memorize the best solution achieved so far
UNTIL (cycle = maximum cycle number or a maximum CPU time)

2.1. Initialization phase

In the initialization phase, food sources are randomly initialised with Eq. (1) in a given range:

xm,i = li + rand(0, 1) ∗ (ui − li) (1)

where xm,i is the value of the i-th dimension of the m-th solution, and li and ui represent the lower and upper bounds of the parameter xm,i.

Then the obtained solutions are evaluated and their objective function values are calculated.

2.2. Employed bees phase

This phase of the algorithm describes the employed bees' behaviour of finding a better food source within the neighbourhood of the food source (xm) in their minds. They leave the hive with the information of the food source position, but when they arrive at the target point they are affected by the traces of the other bees on the flowers and they find a candidate food source. In ABC optimization, they determine this neighbour food source by using Eq. (2).
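Putting the phases together, the cycle above can be sketched in Python. This is an illustrative sketch only, not the authors' C# implementation: the fitness mapping fit = 1/(1 + f) for f ≥ 0, the neighbour search v_mi = x_mi + φ·(x_mi − x_ki), and all parameter defaults are common ABC conventions assumed here.

```python
import random

def abc_minimize(f, lower, upper, dim, colony_size=20, limit=50, max_cycles=200, seed=1):
    """Minimal sketch of standard ABC minimization. Food sources are candidate
    solutions; the number of employed bees equals the number of sources."""
    rng = random.Random(seed)
    sn = colony_size // 2                    # number of food sources

    def new_source():
        # Eq. (1): x_mi = l_i + rand(0, 1) * (u_i - l_i)
        return [lower + rng.random() * (upper - lower) for _ in range(dim)]

    def fit(fx):
        # Common ABC fitness mapping (an assumption of this sketch).
        return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

    sources = [new_source() for _ in range(sn)]
    values = [f(x) for x in sources]
    trials = [0] * sn
    best_x, best_f = min(zip(sources, values), key=lambda p: p[1])

    def try_improve(m):
        nonlocal best_x, best_f
        k = rng.choice([j for j in range(sn) if j != m])   # random partner source
        i = rng.randrange(dim)                             # random dimension
        v = list(sources[m])
        # Neighbour search in the style of Eq. (2): perturb one dimension.
        v[i] = min(max(v[i] + rng.uniform(-1, 1) * (v[i] - sources[k][i]), lower), upper)
        fv = f(v)
        if fit(fv) > fit(values[m]):                       # greedy selection
            sources[m], values[m], trials[m] = v, fv, 0
            if fv < best_f:
                best_x, best_f = v, fv
        else:
            trials[m] += 1

    for _ in range(max_cycles):
        for m in range(sn):                                # employed bees phase
            try_improve(m)
        total = sum(fit(v) for v in values)                # onlooker bees phase:
        for _ in range(sn):                                # roulette wheel, p_m ~ fitness
            u, acc, pick = rng.random() * total, 0.0, sn - 1
            for m in range(sn):
                acc += fit(values[m])
                if acc >= u:
                    pick = m
                    break
            try_improve(pick)
        worst = max(range(sn), key=trials.__getitem__)     # scout bees phase:
        if trials[worst] >= limit:                         # abandon exhausted source
            sources[worst] = new_source()
            values[worst] = f(sources[worst])
            trials[worst] = 0
    return best_x, best_f
```

For example, on a 2-dimensional Sphere function the sketch converges close to the origin within a few hundred cycles.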
food source to exploit depending on this information, probabilistically. In the ABC algorithm, using the fitness values of the solutions, the probability value pm can be calculated by Eq. (4).

In qABC, the onlooker bee considers the solutions in the neighbourhood Nm of xm and chooses the best one, x_Nm^best, to improve. Including xm, if there are S solutions in Nm, the best solution is described by Eq. (7):

fit(x_Nm^best) = max{fit(x_Nm^1), fit(x_Nm^2), . . ., fit(x_Nm^S)} (7)
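Eq. (7) amounts to an argmax over the neighbourhood. A small Python sketch follows; it assumes, as in [30], that Nm contains the solutions whose Euclidean distance to xm is at most r times the mean distance from xm to all other solutions (that neighbourhood definition belongs to Eqs. (5)–(6), which fall outside this excerpt).

```python
import math

def best_neighbour(solutions, fitness, m, r):
    """Return the index of x_Nm^best per Eq. (7): the solution with the
    highest fitness inside the neighbourhood N_m of solution m."""
    d = [math.dist(solutions[m], s) for s in solutions]    # d(m, j) for all j
    md = sum(d) / (len(solutions) - 1)                     # mean distance to x_m
    nm = [j for j, dj in enumerate(d) if dj <= r * md]     # N_m (x_m itself has d = 0)
    return max(nm, key=lambda j: fitness[j])
```

Note that for r = 0 the neighbourhood collapses to xm itself, so the onlooker behaves exactly like a standard ABC onlooker, consistent with the discussion of r = 0 later in the paper.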
Table 1
Test problems.

Function            C    D    Range                 Global minimum       Formula
Ackley              MN   30   [−32, 32]             Fmin = 0             f(x) = 20 + e − 20 exp(−0.2 √((1/D) Σ_{i=1..D} x_i²)) − exp((1/D) Σ_{i=1..D} cos(2πx_i))
Schwefel            MS   30   [−500, 500]           Fmin = −12,569.5     f(x) = −Σ_{i=1..D} x_i sin(√|x_i|)
SixHumpCamelBack    MN   2    [−5, 5]               Fmin = −1.03163      f(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴
Branin              MS   2    [−5, 10] × [0, 15]    Fmin = 0.398         f(x) = (x_2 − (5.1/(4π²))x_1² + (5/π)x_1 − 6)² + 10(1 − 1/(8π)) cos x_1 + 10
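The four formulas recovered in Table 1 can be written directly in Python. The sketch below checks them against the tabulated global minima; the minimizer locations used in the checks (e.g. (π, 2.275) for Branin, (0.0898, −0.7126) for SixHumpCamelBack) are standard values, not taken from the table itself.

```python
import math

def ackley(x):
    d = len(x)
    s1 = sum(t * t for t in x) / d                       # (1/D) sum x_i^2
    s2 = sum(math.cos(2 * math.pi * t) for t in x) / d   # (1/D) sum cos(2*pi*x_i)
    return 20 + math.e - 20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2)

def schwefel(x):
    return -sum(t * math.sin(math.sqrt(abs(t))) for t in x)

def six_hump_camel_back(x):
    x1, x2 = x
    return 4 * x1**2 - 2.1 * x1**4 + x1**6 / 3 + x1 * x2 - 4 * x2**2 + 4 * x2**4

def branin(x):
    x1, x2 = x
    return ((x2 - 5.1 * x1**2 / (4 * math.pi**2) + 5 * x1 / math.pi - 6) ** 2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)
```

At the origin the Ackley function evaluates to exactly 0, and Branin at (π, 2.275) reduces to 10/(8π) ≈ 0.397887, matching Fmin in the table.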
Table 2
Performance comparison of qABC algorithms with different r values on Rosenbrock function.

Table 3
Performance comparison of qABC algorithms with different r values on Griewank function.

Algorithm        Mean             SD               Best    Worst
qABC(r = 0)      0                0                0       0
qABC(r = 0.25)   0                0                0       0
qABC(r = 0.5)    0                0                0       1.3322676E−15
qABC(r = 1)      0                0                0       0
qABC(r = 1.5)    0                0                0       2.1094237E−15
qABC(r = 2)      0                0                0       0
qABC(r = 2.5)    1.1139238E−15    3.6522213E−15    0       1.9317881E−14
qABC(r = 3)      5.8878828E−15    3.1233220E−14    0       1.7408297E−13
qABC(r = ∞)      0                0                0       0
A Wilcoxon statistical test was also carried out for the standard ABC and qABC algorithms. Ten well-known benchmark problems with different characteristics were considered in order to test the performance of qABC. These test problems, the characteristics of the functions (C), the dimensions of the problems (D), the bounds of the search spaces and the global optimum values are presented in Table 1. One of these benchmarks is unimodal-separable (US), two of them are unimodal-nonseparable (UN), three of them are multimodal-separable (MS) and four of them are multimodal-nonseparable (MN).

For each test case, 30 independent runs were carried out with random seeds. Values below E−15 are accepted as 0. Tables 2–9 show the mean values (mean) and the standard deviation values (SD) calculated for the test problems over 30 runs.
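The Wilcoxon signed rank test used here works on the paired per-run (or per-function) results of the two algorithms. A minimal sketch of the test statistic W = min(W+, W−), with zero differences discarded and ties given average ranks, is shown below; a full test would additionally convert W to a p-value, which is omitted here.

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples a, b.
    Zero differences are discarded; tied |differences| get average ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the group of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

In practice a library routine such as `scipy.stats.wilcoxon` would be used to obtain the p-values reported later in Table 11.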
Table 4
Performance comparison of qABC algorithms with different r values on Schaffer function.
Table 5
Performance comparison of qABC algorithms with different r values on Dixon-Price function.
Table 6
Performance comparison of qABC algorithms with different r values on Ackley function.
Table 7
Performance comparison of qABC algorithms with different r values on Schwefel function.
Table 8
Performance comparison of qABC algorithms with different r values on SixHumpCamelBack function.
Table 9
Performance comparison of qABC algorithms with different r values on Branin function.
Table 11
Wilcoxon signed rank test results.

Function            Mean difference    p-Value
Sphere              0                  –
Rosenbrock          0.0437759          0.711
Rastrigin           0                  –
Griewank            0                  –
Schaffer            −8.66098E−06       0.000
Dixon-Price         −1.15022E−12       0.000
Ackley              0                  –
Schwefel            0                  –
SixHumpCamelBack    0                  –
Branin              −5.90257E−10       0.000
and qABC(r = 2.5) is the most successful one in terms of the best of 30 runs among the qABC algorithms with different r values. In Table 9, [...] very close to the optimum result in all columns of the table except [...] close to optimum ones. And at least once in 30 runs, qABC finds the [...] presents more successful results in all fields of the table for both functions. Also, only qABC(r = 0) finds the optimum result for the [...] function. For all r values, qABC presents similar results at the end of the optimization process for the Ackley function. qABC(r = 0) gener[...] smaller than 1.5, qABC gives better values in the column "worst" of [...] similar for the r ≥ 1 valued qABC algorithms for these test problems [...] r > 1 for the Schwefel function. When the graphics of the first six functions are examined for values of r smaller than 1, qABC(r = 0.5) [...]

Table 10
Comparison of the qABC algorithm with state of the art algorithms.
[Table data not recoverable from this copy: mean and SD values over 30 runs for GA, PSO, DE, ABC and qABC on the Sphere, Rosenbrock, Rastrigin, Griewank, Schaffer, Dixon-Price, Ackley, Schwefel, SixHumpCamelBack and Branin functions.]
[Figures: convergence curves (objective function value vs. number of evaluations) of qABC with r = 0, 0.5, 1, 2, 3 and ∞.]
between these two r values on the Rosenbrock and Rastrigin functions. Since qABC converges to the optimal values in very early evaluations on the Branin and SixHumpCamelBack functions for all r values, comparing the speed of convergence through the optimization process is meaningless; it can be said that qABC has a very successful convergence performance on these two 2-dimensional test problems for all considered r values.

These graphics show that the parameter r is one of the main factors for the convergence speed of the qABC algorithm. It should be remembered that, when r = 0, Eq. (5) becomes equal to Eq. (2) and qABC works like the standard ABC. Generally, when r ≥ 1, the standard ABC requires at least two times more function evaluations than qABC to reach the same mean value. When the convergence graphics and the results in the tables are evaluated, it can be generalised that a value around 1 is appropriate for r in the qABC algorithm. So, in the comparison of qABC with the state of the art algorithms (GA, PSO, DE, ABC), the results of qABC with the parameter r = 1 were used. These comparison results are given in Table 10. In the
[Figures: convergence curves (objective function value vs. number of evaluations) of qABC with r = 0, 0.5, 1, 2, 3 and ∞.]
table, the mean of 30 independent runs and the standard deviations are presented for the considered problems. For a fair comparison, the table values below E−12 are accepted as 0, as in [31].

[Figures: convergence curves (objective function value vs. number of evaluations) of qABC with r = 0, 0.5, 1, 2, 3 and ∞.]

When Table 10 is examined, it can be seen that GA has the worst performance among the considered algorithms for all problems except Schwefel, SixHumpCamelBack and Branin; it has the best performance on the SixHumpCamelBack and Branin functions. However, there is not a really remarkable difference between the performances of the algorithms, and we can say that all algorithms perform well on the SixHumpCamelBack and Branin test functions. On the Rosenbrock function, the standard ABC and qABC algorithms find smaller objective function values than the other compared algorithms, and the best mean and SD values belong to the qABC algorithm. The qABC and ABC algorithms find optimum results for the Rastrigin and Griewank functions while the other algorithms do not. The performance of PSO and DE on Griewank is better than that on the Rastrigin function, whereas GA's performance does not show a remarkable difference on these two test functions. PSO and DE achieve the optimum results in the given number of function evaluations, while ABC, qABC and GA converge towards the optimum value with some errors on the Schaffer function. The qABC and ABC algorithms present excellent performance on the Schwefel and Dixon-Price functions, while the other algorithms do not provide such good results. Both ABC algorithms give the same mean value, which is very close to the optimum, for the Schwefel function, and ABC hits the optimum mean value of Dixon-Price while qABC does not. For the Ackley problem, all of the algorithms find the optimum results except GA and PSO; PSO's result is closer to 0 than that of GA.

The table clearly shows that the ABC and qABC algorithms outperform GA, PSO and DE under these conditions on the considered test problems. However, it is not very clear that there is a significant difference between the performances of the two ABC algorithms, which produce very similar results and present the best mean values for six of the ten problems among the compared optimization algorithms. So, in order to compare the performance of qABC and ABC, the Wilcoxon signed rank test was used in this paper. The Wilcoxon test is a nonparametric statistical test that can be used for analyzing the behaviour of evolutionary algorithms [33]. The test results are shown in Table 11. The first column of the table presents the test functions, the second column gives the mean difference between the results of ABC and qABC, and the last column gives the p value, which is an important determiner of the test. Since the mean difference is 0 for six test functions, there are four test problems for which the significance of the
[Figures: convergence curves (objective function value vs. number of evaluations) of qABC with r = 0, 0.5, 1, 2, 3 and ∞.]
difference between the performances of the algorithms can be discussed. Among these four test problems, the p value is different from 0 only for Rosenbrock; so, only for the Rosenbrock problem is there not enough evidence to reject the null hypothesis (0.711 > 0.05). These tests show that, under these conditions, the performance of the ABC algorithm is significantly better than that of qABC for the other three test functions (Schaffer, Dixon-Price and Branin). It should be emphasised that these tests are based on the final results obtained by the algorithms.

Generally, we can interpret the simulation and test results as follows: when Eq. (5) is used for onlookers to produce new solutions with r ≥ 1, the local convergence performance of ABC is significantly improved, especially in the early cycles of the optimization process. So, a good tuning of the parameter r promises a superior convergence performance for the qABC algorithm.

4.1. Time complexity of ABC algorithms

In this section, a time complexity analysis is carried out for the ABC and qABC algorithms on the Rosenbrock function. In order to present the relationship between the time complexity and the dimension of the problem, the complexities are calculated for the dimensions 10, 30 and 50, as described in [32]. The results are shown in Table 12 for the ABC and qABC algorithms. These analyses were performed under Windows 7 Professional (SP1) on an Intel(R) Core(TM) i7 M640 2.80 GHz processor with 8 GB RAM; the algorithms were coded in C# on the .NET Framework 3.5.

The code execution time of this system was obtained and is shown in the table as T0. The computing time of the Rosenbrock function for 200,000 function evaluations is presented as T1. Each of the algorithms was run 5 times for 200,000 function evaluations and the average computing time of the algorithms is presented as T̂2. The algorithm complexities were calculated by (T̂2 − T1)/T0. The time complexity of qABC is higher than that of ABC since there is an additional part in the onlooker bees phase. However, it should be noticed that the rate of increase of the complete computing time of qABC is lower than the rate of increase of the dimension, as for the ABC algorithm. So, it can be stated that there is not a strict dependence between the dimension of the problem and the complexities of qABC and ABC.

4.2. Experiments on "colony size"

For the experiments in this section, four test functions with different characteristics were selected from Table 1: the Sphere, Rosenbrock, Rastrigin and Griewank functions. In the experiments, r was set to 1. The same parameter setting as in the previous experiments was used (maximum evaluation number = 500,000). The limit value was calculated by using Eq. (9), as indicated before. The qABC algorithm was tested on the mentioned functions for several different colony size (CS) values: 4, 6, 12, 24, 50, 100 and 200. The results of the experiments are presented in Table 13.

The table shows that the qABC algorithm gives the optimal results in all fields, without being influenced by changes in CS, for CS > 6 and CS > 12 on the Sphere and Rastrigin functions, respectively. The optimal values are also found by the algorithm on the Griewank function with CS = 50 and CS = 200. Although the algorithm finds results very close to the optimum for CS = 100, when CS < 50 there is no efficient convergence to the optimum on the Griewank test function. Considering the standard deviations, the mean values are very similar for the Rosenbrock problem over the CS intervals 24–200 and 6–12. However, when CS = 4, the results of qABC get significantly worse on the Rosenbrock function.

4.3. Experiments on "limit"

The same test functions as in the previous section were used to test the qABC algorithm for different limit values (10, 50, 187, 375, 750, 1500), in order to observe the relation between the parameter limit and the performance of the algorithm. The
Table 12
Time complexities of the ABC and qABC algorithms on Rosenbrock function.
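The complexity measure (T̂2 − T1)/T0 described above can be sketched as follows. This is an illustrative reading of the protocol from [32]: the particular reference computation used for T0 below is an assumption of this sketch, since [32] specifies its own standard loop.

```python
import time

def measure_complexity(algorithm_run, f, evals=200_000, runs=5):
    """Algorithm complexity (T2_hat - T1) / T0 in the style of [32].
    algorithm_run(f, evals) runs the optimizer for `evals` evaluations of f."""
    t = time.perf_counter()                  # T0: a fixed reference computation
    s = 0.0                                  # (assumed here; [32] defines its own loop)
    for i in range(1_000_000):
        s += i * 0.5
    t0 = time.perf_counter() - t

    x = [0.1] * 30
    t = time.perf_counter()                  # T1: pure function-evaluation time
    for _ in range(evals):
        f(x)
    t1 = time.perf_counter() - t

    total = 0.0                              # T2_hat: mean algorithm run time
    for _ in range(runs):
        t = time.perf_counter()
        algorithm_run(f, evals)
        total += time.perf_counter() - t
    t2_hat = total / runs
    return (t2_hat - t1) / t0
```

The returned value isolates the algorithm's bookkeeping overhead from the cost of the objective function itself, which is why qABC's extra neighbourhood computation shows up as a higher complexity than ABC's.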
Table 14
Effect of the limit on the performance of qABC algorithm.
same parameter setting used in the previous experiments (colony size = 50 and maximum evaluation number = 500,000) was used. In terms of the mean and standard deviation of 30 independent runs, the simulation results are given in Table 14. On the Griewank, Sphere and Rastrigin functions, the results get better as the limit values increase. The algorithm achieves the optimum results when the limit value is l ≥ 750 for the Griewank function and l ≥ 375 for the Sphere and Rastrigin functions. When the standard deviation is considered, the difference between the mean objective function values produced for different limit values looks very small on the Rosenbrock function, except for the smallest limit value, 10.

The results of these experiments showed that 750, which is equal to the value calculated by Eq. (9), is a suitable value for the limit parameter.

5. Conclusion

In this paper a new definition for the behaviour of the onlooker bees of the ABC algorithm was presented and a novel version of ABC called quick ABC (qABC) was described. Experimental studies showed that the new definition significantly improves the convergence performance of the standard ABC when the neighbourhood radius r is set appropriately.

The performance of the qABC algorithm was compared with the standard ABC and state of the art algorithms. The results showed that the qABC algorithm presents promising results for the considered problems. In order to analyse the effect of the parameters limit and colony size on the performance of the qABC algorithm, some experiments were also conducted. Moreover, time complexity analyses were carried out for the ABC and qABC algorithms.

In the future, the adaptation of the parameter r can be studied to improve the performance of qABC. It is also noted that the qABC algorithm can be used for all types of optimization problems, such as binary, combinatorial and integer optimization problems.

References

[1] D. Karaboga, Artificial bee colony algorithm, Scholarpedia 5 (3) (2010) 6915, www.scholarpedia.org/article/Artificial_bee_colony_algorithm.
[2] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[3] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. (2012), http://dx.doi.org/10.1007/s10462-012-9328-0.
[4] P.W. Tsai, J.S. Pan, B.Y. Liao, S.C. Chu, Enhanced artificial bee colony optimization, Int. J. Innov. Comput. Inf. Control 5 (12) (2009) 5081–5092.
[5] H. Narasimhan, Parallel artificial bee colony (PABC) algorithm, in: Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), 2009, pp. 306–311.
[6] M. Subotic, M. Tuba, N. Stanarevic, Parallelization of the artificial bee colony (ABC) algorithm, in: Proceedings of the 11th WSEAS International Conference on Neural Networks, the 11th WSEAS International Conference on Evolutionary Computing and the 11th WSEAS International Conference on Fuzzy Systems, WSEAS, Stevens Point, Wisconsin, USA, 2010, pp. 191–196.
[7] W. Zou, Y. Zhu, H. Chen, X. Sui, A clustering approach using cooperative artificial bee colony algorithm, Discret. Dyn. Nat. Soc. (2010), http://dx.doi.org/10.1155/2010/459796.
[8] M. Subotic, M. Tuba, N. Stanarevic, Different approaches in parallelization of the artificial bee colony algorithm, Int. J. Math. Model. Method Appl. Sci. 5 (4) (2011) 755–762.
[9] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. (2010), http://dx.doi.org/10.1016/j.amc.2010.08.049.
[10] X. Xu, X. Lei, Multiple sequence alignment based on ABC SA, in: Proceedings of the Artificial Intelligence and Computational Intelligence, Lecture Notes in Computer Science, vol. 6320, 2010, pp. 98–105.
[11] M. Tuba, N. Bacanin, N. Stanarevic, Guided artificial bee colony algorithm, in: Proceedings of the European Computing Conference (ECC11), 2011, pp. 398–403.
[12] G. Li, P. Niu, X. Xiao, Development and investigation of efficient artificial bee colony algorithm for numerical function optimization, Appl. Soft Comput. (2011), http://dx.doi.org/10.1016/j.asoc.2011.08.040.
[13] A. Banharnsakun, B. Sirinaovakul, T. Achalakul, Job shop scheduling with the best-so-far ABC, Eng. Appl. Artif. Intell. 25 (3) (2012) 583–593.
[14] X. Bi, Y. Wang, An improved artificial bee colony algorithm, in: Proceedings of the 3rd International Conference on Computer Research and Development (ICCRD), vol. 2, 2011, pp. 174–177.
[15] W.F. Gao, S.Y. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (3) (2012) 687–697.
[16] E. Mezura-Montes, O. Cetina-Dominguez, Empirical analysis of a modified artificial bee colony for constrained numerical optimization, Appl. Math. Comput. 218 (22) (2012) 10943–10973.
[17] N. Bacanin, M. Tuba, Artificial bee colony (ABC) algorithm for constrained optimization improved with genetic operators, Stud. Inf. Control 21 (2) (2012) 137–146.
[18] W.F. Gao, S.Y. Liu, F. Jiang, An improved artificial bee colony algorithm for directing orbits of chaotic systems, Appl. Math. Comput. 218 (7) (2011) 3868–3879.
[19] W.F. Gao, S.Y. Liu, L.L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (11) (2012) 2741–2753.
[20] Y. Liu, X.X. Ling, Y. Liang, G.H. Liu, Improved artificial bee colony algorithm with mutual learning, J. Syst. Eng. Electron. 23 (2) (2012) 265–275.
[21] F. Kang, J. Li, Q. Xu, Hybrid simplex artificial bee colony algorithm and its application in material dynamic parameter back analysis of concrete dams, J. Hydraul. Eng. 40 (6) (2009) 736–742.
[22] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Comput. Struct. 87 (13–14) (2009) 861–870.
[23] Y. Marinakis, M. Marinaki, N. Matsatsinis, A hybrid discrete artificial bee colony – GRASP algorithm for clustering, in: Proceedings of the International Conference on Computers and Industrial Engineering (CIE 2009), vols. 1–3, 2009, pp. 548–553.
[24] R. Xiao, T. Chen, Enhancing ABC optimization with Ai-net algorithm for solving project scheduling problem, in: Proceedings of the 7th International Conference on Natural Computation (ICNC), vol. 3, 2011, pp. 1284–1288.
[25] W. Bin, C.H. Qian, Differential artificial bee colony algorithm for global numerical optimization, J. Comput. 6 (5) (2011) 841–848.
[26] T.K. Sharma, M. Pant, Differential operators embedded artificial bee colony algorithm, Int. J. Appl. Evol. Comput. 2 (3) (2011) 1–14.
[27] F. Kang, J. Li, Z. Ma, H. Li, Artificial bee colony algorithm with local search for numerical optimization, J. Softw. 6 (3) (2011) 490–497.
[28] T.J. Hsieh, H.F. Hsiao, W.C. Yeh, Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm, Neurocomputing 82 (2012) 196–206.
[29] A. Abraham, R.K. Jatoth, A. Rajasekhar, Hybrid differential artificial bee colony algorithm, J. Comput. Theor. Nanosci. 9 (2) (2012) 249–257.
[30] D. Karaboga, B. Gorkemli, A quick artificial bee colony – qABC – algorithm for optimization problems, in: Proceedings of the 2012 International Symposium on Innovations in Intelligent Systems and Applications (INISTA), 2–4 July, Turkey, 2012, http://dx.doi.org/10.1109/INISTA.2012.6247010.
[31] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[32] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.
[33] S. Garcia, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC2005 special session on real parameter optimization, J. Heuristics 15 (6) (2009) 617–644.