Applied Soft Computing 23 (2014) 227–238
http://dx.doi.org/10.1016/j.asoc.2014.06.035

A quick artificial bee colony (qABC) algorithm and its performance on optimization problems

Dervis Karaboga, Beyza Gorkemli*

Erciyes University, Engineering Faculty, Intelligent Systems Research Group, Kayseri, Turkey

* Corresponding author. E-mail addresses: karaboga@erciyes.edu.tr (D. Karaboga), bgorkemli@erciyes.edu.tr (B. Gorkemli).

Article history: Received 5 March 2013; received in revised form 28 March 2014; accepted 22 June 2014; available online 28 June 2014.

Keywords: Optimization; Swarm intelligence; Artificial bee colony; Quick artificial bee colony

Abstract

Artificial bee colony (ABC) algorithm, inspired by the foraging behaviour of honey bees, is one of the most popular swarm intelligence based optimization techniques. Quick artificial bee colony (qABC) is a new version of the ABC algorithm which models the behaviour of onlooker bees more accurately and improves the performance of standard ABC in terms of local search ability. In this study, the qABC method is described and its performance is analysed, depending on the neighbourhood radius, on a set of benchmark problems. Some analyses of the effect of the parameters limit and colony size on qABC optimization are also carried out. Moreover, the performance of qABC is compared with that of state-of-the-art algorithms.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Optimization is the task of finding the best solutions, according to an objective, in the search space of a problem. In order to solve real world optimization problems – especially NP-hard problems – evolutionary computation (EC) based optimization methods, which consist of evolutionary algorithms and swarm intelligence based algorithms, are frequently preferred. A swarm intelligence based algorithm models an intelligent behaviour or behaviours of social creatures that can be characterised as an intelligent swarm, and this model can be used for searching for the optimal solutions of various engineering problems. The artificial bee colony (ABC) algorithm is a swarm intelligence based optimization technique that models the foraging behaviour of honey bees in nature [1]. The algorithm was first introduced to the literature by Karaboga in 2005 [2]. Since then, the ABC algorithm has been used in many applications; a good survey study on the ABC algorithm can be found in [3].

Although standard ABC optimization generally produces successful results in many application studies, in order to get a better performance from the ABC algorithm some researchers have attempted to implement ABC in parallel [4–8], and some have integrated concepts from other EC based methods into the ABC algorithm [9–20]. Zhu and Kwong, influenced by particle swarm optimization (PSO), proposed an improved ABC algorithm named gbest-guided ABC [9]; they placed the global best solution into the solution search equation. For the multiple sequence alignment problem, Xu and Lei introduced the Metropolis acceptance criterion into the searching process of ABC to prevent the algorithm from sliding into a local optimum [10]; they called the improved version ABC SA. Tuba et al. proposed the guided ABC (GABC) algorithm, which integrates ABC optimization with self-adaptive guidance [11]. Li et al. used inertia weight and acceleration coefficients to get a better performance in the searching process of the ABC algorithm [12]. A new scheduling method based on best-so-far ABC for solving the job shop scheduling problem (JSSP) was proposed by Banharnsakun et al. [13]; in this method, the solution direction is biased toward the best-so-far solution rather than a neighbour one. Bi and Wang presented a modification of the scouts' behaviour in the ABC algorithm [14]; they used a mutation strategy based on opposition-based learning and called the new method fast mutation ABC. Inspired by the differential evolution (DE) algorithm, Gao and Liu presented a new solution search equation for standard ABC [15]; the new equation aims to improve the exploitation capability and is based on the bee searching only around the best solution of the previous iteration. Mezura-Montes and Cetina-Dominguez presented a modified ABC to solve constrained numerical optimization problems [16]; they modified the ABC algorithm with the selection mechanism, the scout bee operator and the handling of equality and boundary constraints. For constrained optimization problems, Bacanin and Tuba introduced some modifications based on genetic algorithm operators to the ABC algorithm [17]. In order to improve the exploitation capability of the ABC algorithm, Gao et al. presented an improved ABC [18]; in this improved version, they used a modified search strategy to generate new food sources.

They also used the opposition-based learning method and chaotic maps to produce the initial population for a better global convergence. For better exploitation, Gao et al. also presented a modified ABC algorithm inspired by DE [19]; with this modification, each bee searches only around the best solution of the previous cycle to improve the exploitation, and chaotic systems and the opposition-based learning method are used while producing the initial population and the scout bees to improve the global convergence. Liu et al. introduced an improved ABC algorithm with mutual learning [20]; this approach adjusts the produced candidate food source using some individuals which are selected by a mutual learning factor.

In order to get a more powerful optimization technique, some researchers have combined the ABC algorithm with other EC based methods or traditional algorithms. Kang et al. proposed a hybrid simplex ABC that combines ABC with the Nelder–Mead simplex method [21], and in another study they used this new method for inverse analysis problems [22]. Marinakis et al. proposed a hybrid algorithm based on ABC optimization and the greedy randomised adaptive search procedure in order to cluster n objects into k clusters [23]. Another hybrid ABC was described by Xiao and Chen, using an artificial immune network algorithm [24]; they applied the hybrid algorithm to the multi-mode resource constrained multi-project scheduling problem. Bin and Qian introduced a differential ABC algorithm for global numerical optimization [25]. Sharma and Pant used DE operators with the standard ABC algorithm [26]. Kang et al. demonstrated how the standard ABC can be improved by incorporating a hybridization strategy [27]; they suggested a novel hybrid optimization technique composed of Hooke–Jeeves pattern search and the ABC algorithm. Hsieh et al. proposed a new hybrid algorithm of PSO and ABC optimization [28]. Abraham et al. made a hybridization of the ABC algorithm and the DE strategy, and called this novel approach the hybrid differential ABC algorithm [29].

In this work, instead of presenting a new hybrid ABC algorithm or integrating an operator of an existing algorithm into ABC, our aim is to model the behaviour of the foragers in ABC more accurately by introducing a new definition for onlooker bees. By using the new definition proposed for onlooker bees, ABC achieves a better performance than standard ABC in terms of local search ability; hence we called our new algorithm quick ABC (qABC) [30]. In this work, the qABC algorithm is described in a more detailed way, its performance is tested on a set of test problems larger than that in [30], and the effects of the control parameters neighbourhood radius, limit and colony size on its performance are investigated. The performance of qABC is also compared with that of state-of-the-art algorithms. The rest of the paper is organised as follows: Section 2 describes the standard ABC, the novel strategy (qABC) is presented in Section 3, the computational study and simulation results are demonstrated in Section 4 and, finally, the conclusion is given in Section 5.

2. Standard ABC algorithm

In the ABC algorithm, the artificial bees are divided into three groups, considering the foraging behaviour of the colony. The first group consists of the employed bees. These bees have a food source position in their mind when they leave the hive, and they perform dances about their food sources on the dancing area in the hive. Some of the bees decide which food sources to exploit by watching the dances of the employed bees; this group of bees is called onlookers. In the algorithm, onlookers select the food sources with a probability related to the qualities of the food sources. The last bee group is the scouts. Regardless of any information from other bees, a scout finds a new food source and starts to consume it; then she continues her work as an employed bee. Hence, while the known resources are being consumed, exploration of new food sources is provided at the same time. At the beginning of the search (initialization phase), all of the bees in the hive work as scouts, and they all start with random solutions or food sources. In further cycles, when a food source is abandoned, the employed bee related to the abandoned resource becomes a scout. In the algorithm, a parameter, limit, is used to control the abandonment of the food sources. For each solution, the number of unsuccessful improvement trials is counted; in every cycle, the solution which has the maximum trial number is determined and its trial number is compared with the parameter limit. If it reaches the limit value, this solution is abandoned and the searching process continues with a randomly produced new solution.

In the ABC algorithm, a food source position is defined as a possible solution, and the nectar amount or quality of the food source corresponds to the fitness of the related solution in the optimization process. Since each employed bee is associated with one and only one food source, the number of employed bees is equal to the number of food sources.

The general algorithmic structure of the ABC optimization is given below:

Initialization phase
REPEAT
    Employed bees phase
    Onlooker bees phase
    Scout bees phase
    Memorize the best solution achieved so far
UNTIL (cycle = maximum cycle number or a maximum CPU time)

2.1. Initialization phase

In the initialization phase, food sources are randomly initialised with Eq. (1) in a given range:

x_{m,i} = l_i + rand(0, 1) × (u_i − l_i)    (1)

where x_{m,i} is the value of the i-th dimension of the m-th solution, and l_i and u_i represent the lower and upper bounds of the parameter x_{m,i}, respectively. Then, the obtained solutions are evaluated and their objective function values are calculated.

2.2. Employed bees phase

This phase of the algorithm describes the employed bees' behaviour of searching for a better food source within the neighbourhood of the food source (x_m) in their minds. They leave the hive with the information of the food source position, but when they arrive at the target point, they are affected by the traces of the other bees on the flowers and find a candidate food source. In ABC optimization, they determine this neighbour food source by using Eq. (2):

ν_{m,i} = x_{m,i} + φ_{m,i} (x_{m,i} − x_{k,i})    (2)

where x_k is a randomly selected food source, i is a randomly chosen dimension and φ_{m,i} is a random number within the range [−1, 1]. After producing the new candidate food source ν_m, its profitability is calculated. Then, a greedy selection is applied between ν_m and x_m.

The fitness of a solution fit(x_m) can be calculated from its objective function value f(x_m) by using Eq. (3):

fit(x_m) = 1 / (1 + f(x_m))    if f(x_m) ≥ 0
fit(x_m) = 1 + |f(x_m)|        if f(x_m) < 0    (3)

2.3. Onlooker bees phase

When the employed bees return to the hive, they share their food source information with the onlooker bees. An onlooker chooses her food source to exploit depending on this information, probabilistically. In the ABC algorithm, using the fitness values of the solutions, the probability value p_m can be calculated by Eq. (4):

p_m = fit(x_m) / Σ_{m=1}^{SN} fit(x_m)    (4)

After a food source is selected, a neighbour source ν_m is determined by using Eq. (2), as in the employed bees phase, and its fitness value is computed. Then, a greedy selection is applied between ν_m and x_m. Therefore, by recruiting more onlookers to richer sources, a positive feedback behaviour appears.

2.4. Scout bees phase

At the end of every cycle, the trial counters of all solutions are checked, and abandonment of the solution that has the maximum counter value is decided by looking at the limit parameter. If an abandonment is detected, the related employed bee is converted to a scout and takes a new randomly produced solution using Eq. (1). In this phase, to balance the positive feedback, a negative feedback behaviour arises.
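To make the phases above concrete, here is a minimal Python sketch of one ABC cycle for a minimization problem. It is only our illustration of Eqs. (1)–(4) and the limit mechanism, not the authors' code; all helper names (init_source, move, abc_cycle) are ours, and NumPy is assumed.

import numpy as np

rng = np.random.default_rng(0)

def init_source(d, lo, hi):
    # Eq. (1): x_{m,i} = l_i + rand(0,1) * (u_i - l_i)
    return lo + rng.random(d) * (hi - lo)

def fitness(fval):
    # Eq. (3): objective value -> fitness, for minimization
    return 1.0 / (1.0 + fval) if fval >= 0 else 1.0 + abs(fval)

def move(x, m, base):
    # Eq. (2): perturb one random dimension of `base` using a random partner k != m
    sn, d = x.shape
    k = rng.choice([j for j in range(sn) if j != m])
    i = rng.integers(d)
    v = base.copy()
    v[i] = base[i] + rng.uniform(-1.0, 1.0) * (base[i] - x[k, i])
    return v

def abc_cycle(f, x, fx, trials, lo, hi, limit):
    sn = x.shape[0]

    def try_improve(m, v):
        fv = f(v)
        if fv < fx[m]:                    # greedy selection
            x[m], fx[m], trials[m] = v, fv, 0
        else:
            trials[m] += 1

    for m in range(sn):                   # employed bees phase
        try_improve(m, move(x, m, x[m]))
    fit = np.array([fitness(v) for v in fx])
    p = fit / fit.sum()                   # Eq. (4): selection probabilities
    for _ in range(sn):                   # onlooker bees phase
        m = int(rng.choice(sn, p=p))
        try_improve(m, move(x, m, x[m]))
    m = int(np.argmax(trials))            # scout bees phase
    if trials[m] >= limit:
        x[m] = init_source(x.shape[1], lo, hi)
        fx[m], trials[m] = f(x[m]), 0

A short usage sketch (25 food sources, i.e. a colony of 50, and limit = 750 for D = 30, following the settings used later in the paper):

d, sn = 30, 25
lo, hi = np.full(d, -100.0), np.full(d, 100.0)
sphere = lambda v: float(np.sum(v * v))
x = np.array([init_source(d, lo, hi) for _ in range(sn)])
fx = np.array([sphere(v) for v in x])
trials = np.zeros(sn, dtype=int)
for _ in range(2000):
    abc_cycle(sphere, x, fx, trials, lo, hi, limit=750)
print(fx.min())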
3. The novel definition for the searching behaviour of onlookers (qABC algorithm)

In real honey bee colonies, an employed bee exploits the food source that she has visited before, but an onlooker chooses a food source region depending on the dances of the employed bees. After reaching that region, which she visits for the first time, the onlooker bee examines the food sources in the area and chooses the fittest one to exploit. So, it can be said that onlookers choose their food sources in a different way from employed bees. However, in the standard ABC algorithm this difference is not considered, and artificial employed bees and onlookers determine a new candidate solution by using the same formula (Eq. (2)). Onlookers' behaviour should be modelled by a formula different from Eq. (2). So, in the qABC algorithm, a new definition is introduced for the behaviour of onlookers. This novel definition is given as follows:

ν_{N_m,i}^{best} = x_{N_m,i}^{best} + φ_{m,i} (x_{N_m,i}^{best} − x_{k,i})    (5)

In this formula, x_{N_m}^{best} represents the best solution among the neighbours of x_m and itself (N_m). A similarity measure in terms of the structure of the solutions can be used to determine a neighbourhood for x_m. At this point, in order to define an appropriate neighbourhood, different approaches could be used, and for different representations of the solutions, different similarity measures can be defined. Hence, using this novel formula, Eq. (5), combinatorial or binary problems could also be optimised with the qABC algorithm. As an instance, for numerical optimization problems the neighbourhood of a solution x_m could be defined considering the mean Euclidean distance between x_m and the rest of the solutions. Representing the Euclidean distance between x_m and x_j as d(m, j), the mean Euclidean distance for x_m, md_m, is calculated by Eq. (6):

md_m = (Σ_{j=1}^{SN} d(m, j)) / (SN − 1)    (6)

If the Euclidean distance of a solution from x_m is less than the mean Euclidean distance md_m, it can be accepted as a neighbour of x_m. It means that an onlooker bee watches the dances of the employed bees in the hive and, being affected by them, selects the region which is centred on the food source x_m. When she arrives at the region of x_m, she examines all of the food sources in N_m and chooses the best one, x_{N_m}^{best}, to improve. Including x_m, if there are S solutions in N_m, the best solution is described by Eq. (7):

fit(x_{N_m}^{best}) = max{ fit(x_{N_m}^{1}), fit(x_{N_m}^{2}), ..., fit(x_{N_m}^{S}) }    (7)

In order to determine a neighbour of x_m, the more general and flexible definition given below can be used:

if d(m, j) ≤ r × md_m then x_j is a neighbour of x_m, else it is not    (8)

With this expression, a new parameter r, which refers to the "neighbourhood radius", is added to the parameters of the standard ABC algorithm. This parameter must satisfy r ≥ 0. In Eq. (8), when r = 0, Eq. (5) works the same as Eq. (2) and, in this situation, qABC turns into the standard ABC, since x_{N_m}^{best} becomes x_m. While the value of r increases, the neighbourhood of x_m enlarges; its neighbourhood shrinks as the value of r decreases.

Detailed steps of the qABC algorithm are given below:

Initialization phase:
    Initialize the control parameters: colony size CS, maximum number of cycles MaxNum, limit l.
    Initialize the positions of the food sources (initial solutions) using Eq. (1), x_m, m = 1, 2, ..., SN.
    Evaluate the solutions.
    Memorize the best solution.
    c = 0
repeat
    Employed bees phase: for each employed bee;
        Generate a new candidate solution ν_m in the neighbourhood of x_m (Eq. (2)) and evaluate it.
        Apply a greedy selection between x_m and ν_m.
    Compute the fitness of the solutions by using Eq. (3) and calculate the probability values p_m for the solutions x_m with Eq. (4).
    Onlooker bees phase: for each onlooker bee;
        Select a solution x_m depending on the p_m values.
        Find the best solution x_{N_m}^{best} among the neighbours of x_m and itself; these neighbours are determined by expression (8).
        Generate a new candidate solution ν_{N_m}^{best} from x_{N_m}^{best} (Eq. (5)) and evaluate it.
        Apply a greedy selection between x_{N_m}^{best} and ν_{N_m}^{best}.
    Memorize the best solution found so far.
    Scout bee phase: using the limit parameter value l, determine the abandoned solution. If it exists, replace it with a new solution for the scout by using Eq. (1).
    c = c + 1
until (c = MaxNum)
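A sketch of this onlooker move in Python follows, mapping Eqs. (5)–(8) onto hypothetical helpers (mean_distance, neighbourhood, qabc_onlooker_move); it assumes a minimization objective f and is illustrative rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(1)

def mean_distance(x, m):
    # Eq. (6): md_m = (sum_j d(m, j)) / (SN - 1)
    return np.linalg.norm(x - x[m], axis=1).sum() / (len(x) - 1)

def neighbourhood(x, m, r):
    # Eq. (8): x_j is a neighbour of x_m iff d(m, j) <= r * md_m (x_m itself included)
    d = np.linalg.norm(x - x[m], axis=1)
    return np.flatnonzero(d <= r * mean_distance(x, m))

def qabc_onlooker_move(f, x, fx, m, r):
    # Eq. (7): for minimization, the fittest source in N_m is the one with lowest f
    nm = neighbourhood(x, m, r)
    b = int(nm[np.argmin(fx[nm])])
    # Eq. (5): produce the candidate around x^best_{N_m} instead of around x_m
    sn, dim = x.shape
    k = rng.choice([j for j in range(sn) if j != b])
    i = rng.integers(dim)
    v = x[b].copy()
    v[i] = x[b, i] + rng.uniform(-1.0, 1.0) * (x[b, i] - x[k, i])
    fv = f(v)
    if fv < fx[b]:                 # greedy selection against the best neighbour
        x[b], fx[b] = v, fv

Note that with r = 0 the neighbourhood reduces to {x_m}, so the move falls back to the standard Eq. (2) step, which matches the statement above that qABC with r = 0 is the standard ABC.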
4. Computational study and discussion

We conducted experiments with different values of r (r = 0, 0.25, 0.5, 1, 1.5, 2, 2.5, 3 and r = ∞), and the effect of this parameter was analysed in terms of the convergence performance and the quality of the solutions obtained by the algorithm. The results of qABC were compared with those of state-of-the-art algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the differential evolution (DE) algorithm and standard ABC. The results of GA, PSO and DE are taken from [31]. For a fair comparison, the same parameter settings and evaluation number are used as in [31]: the colony size is 50 and the maximum evaluation number is 500,000. For the scout process, the limit parameter l is calculated with Eq. (9) [31]:

l = (CS × D) / 2    (9)

where CS is the colony size and D is the dimension of the problem.

Table 1
Test problems.

Test function       C    D    Interval             Min                 Formulation
Sphere              US   30   [−100, 100]          F_min = 0           f(x) = Σ_{i=1}^{D} x_i²
Rosenbrock          UN   30   [−30, 30]            F_min = 0           f(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]
Rastrigin           MS   30   [−5.12, 5.12]        F_min = 0           f(x) = Σ_{i=1}^{D} (x_i² − 10 cos(2πx_i) + 10)
Griewank            MN   30   [−600, 600]          F_min = 0           f(x) = (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1
Schaffer            MN   2    [−100, 100]          F_min = 0           f(x) = 0.5 + (sin²(√(Σ_{i=1}^{D} x_i²)) − 0.5) / (1 + 0.001 Σ_{i=1}^{D} x_i²)²
Dixon-Price         UN   30   [−10, 10]            F_min = 0           f(x) = (x_1 − 1)² + Σ_{i=2}^{D} i(2x_i² − x_{i−1})²
Ackley              MN   30   [−32, 32]            F_min = 0           f(x) = 20 + e − 20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i))
Schwefel            MS   30   [−500, 500]          F_min = −12,569.5   f(x) = −Σ_{i=1}^{D} x_i sin(√|x_i|)
SixHumpCamelBack    MN   2    [−5, 5]              F_min = −1.03163    f(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴
Branin              MS   2    [−5, 10] × [0, 15]   F_min = 0.398       f(x) = (x_2 − (5.1/(4π²))x_1² + (5/π)x_1 − 6)² + 10(1 − 1/(8π)) cos x_1 + 10
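For reference, two of the benchmarks of Table 1 written as Python functions (our transcription of the standard definitions; their global minima are at x = (1, ..., 1) and x = (0, ..., 0), respectively):

import numpy as np

def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

assert rosenbrock(np.ones(30)) == 0.0 and rastrigin(np.zeros(30)) == 0.0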

Table 2
Performance comparison of qABC algorithms with different r values on Rosenbrock function.

Algorithm Mean SD Best Worst

qABC(r = 0) 0.1766957 0.2661797 0.0005833 1.0439761


qABC(r = 0.25) 0.1464777 0.2563342 0.0004032 1.0708763
qABC(r = 0.5) 0.1066600 0.2173580 0.0001952 1.1410752
qABC(r = 1) 0.1329198 0.1799131 0.0001488 0.8556796
qABC(r = 1.5) 0.0886332 0.1181011 0.0010554 0.5050920
qABC(r = 2) 0.1391871 0.2204129 0.0004161 0.9241398
qABC(r = 2.5) 0.0976565 0.1918210 3.4632951E−05 0.9517281
qABC(r = 3) 0.1319018 0.1821670 0.0016222 0.6486930
qABC(r =∞) 0.1194790 0.2266048 9.2660744E−05 0.9657111

The best values are written in bold.

Table 3
Performance comparison of qABC algorithms with different r values on Griewank function.

Algorithm Mean SD Best Worst

qABC(r = 0) 0 0 0 0
qABC(r = 0.25) 0 0 0 0
qABC(r = 0.5) 0 0 0 1.3322676E−15
qABC(r = 1) 0 0 0 0
qABC(r = 1.5) 0 0 0 2.1094237E−15
qABC(r = 2) 0 0 0 0
qABC(r = 2.5) 1.1139238E−15 3.6522213E−15 0 1.9317881E−14
qABC(r = 3) 5.8878828E−15 3.1233220E−14 0 1.7408297E−13
qABC(r =∞) 0 0 0 0

The Wilcoxon statistical test was also carried out for the standard ABC and qABC algorithms. Ten well-known benchmark problems with different characters were considered in order to test the performance of qABC. These test problems, the characteristics of the functions (C), the dimensions of the problems (D), the bounds of the search spaces and the global optimum values for these problems are presented in Table 1. One of these benchmarks is unimodal-separable (US), two of them are unimodal-nonseparable (UN), three of them are multimodal-separable (MS) and four of them are multimodal-nonseparable (MN).

For each test case, 30 independent runs were carried out with random seeds. Values below 1E−15 are accepted as 0. Tables 2–9 show the mean values (mean) and the standard deviation values (SD) calculated for the test problems over 30 runs.

Table 4
Performance comparison of qABC algorithms with different r values on Schaffer function.

Algorithm Mean SD Best Worst

qABC(r = 0) 1.0367306E−10 4.8286140E−10 0 2.6913450E−09


qABC(r = 0.25) 3.3495204E−07 7.2781864E−07 5.0494242E−10 3.5700525E−06
qABC(r = 0.5) 2.3833678E−06 2.9895980E−06 1.7145336E−08 1.1465332E−05
qABC(r = 1) 8.6610854E−06 7.8289333E−06 3.4646886E−08 3.1484602E−05
qABC(r = 1.5) 1.1965610E−05 1.8307200E−05 1.6412526E−07 8.8334161E−05
qABC(r = 2) 1.2654317E−05 1.5067672E−05 1.1556006E−08 5.1342794E−05
qABC(r = 2.5) 1.0554241E−05 1.2934532E−05 1.6327515E−08 6.4734260E−05
qABC(r = 3) 5.4603177E−06 6.9975635E−06 1.0385392E−08 3.4239721E−05
qABC(r =∞) 7.4164064E−06 8.4686549E−06 8.9004138E−08 3.4904411E−05

Table 5
Performance comparison of qABC algorithms with different r values on Dixon-Price function.

Algorithm Mean SD Best Worst

qABC(r = 0) 4.0922605E−15 0 2.2791451E−15 5.6569644E−15


qABC(r = 0.25) 7.9822146E−14 2.0983737E−13 3.6601337E−15 1.1324986E−12
qABC(r = 0.5) 2.0912869E−13 6.1424217E−13 2.3120888E−15 3.1851131E−12
qABC(r = 1) 1.1543113E−12 3.3608330E−12 7.6586127E−15 1.7660988E−11
qABC(r = 1.5) 4.8758038E−10 1.7748020E−09 7.1680057E−15 8.4398125E−09
qABC(r = 2) 1.7512132E−11 3.7392269E−11 4.8438112E−14 1.6990125E−10
qABC(r = 2.5) 4.7203508E−11 1.5378112E−10 3.1991671E−14 8.0074050E−10
qABC(r = 3) 4.5301684E−12 1.3752168E−11 7.5714139E−15 7.6488374E−11
qABC(r =∞) 1.1022304E−09 5.7130716E−09 3.9034247E−14 3.1861559E−08

Table 6
Performance comparison of qABC algorithms with different r values on Ackley function.

Algorithm Mean SD Best Worst

qABC(r = 0) 3.0819791E−14 2.3206228E−15 2.7977620E−14 3.8635761E−14


qABC(r = 0.25) 3.3306691E−14 4.5635433E−15 2.0872193E−14 3.8635761E−14
qABC(r = 0.5) 3.2359300E−14 3.5150128E−15 2.7977620E−14 3.8635761E−14
qABC(r = 1) 3.5556743E−14 3.6385215E−15 2.7977620E−14 3.8635761E−14
qABC(r = 1.5) 3.5319895E−14 3.9914263E−15 2.7977620E−14 4.2188475E−14
qABC(r = 2) 3.6148862E−14 5.1196893E−15 2.0872193E−14 4.2188475E−14
qABC(r = 2.5) 3.4964624E−14 3.7242352E−15 2.7977620E−14 4.2188475E−14
qABC(r = 3) 3.5201471E−14 4.0489854E−15 2.7977620E−14 4.2188475E−14
qABC(r =∞) 3.4964624E−14 4.3495560E−15 2.7977620E−14 4.2188475E−14

Table 7
Performance comparison of qABC algorithms with different r values on Schwefel function.

Algorithm Mean SD Best Worst

qABC(r = 0) −12,569.4866182 2.0739670E−12 −12,569.4866182 −12,569.4866182


qABC(r = 0.25) −12,569.4866182 1.8189894E−12 −12,569.4866182 −12,569.4866182
qABC(r = 0.5) −12,569.4866182 1.9926030E−12 −12,569.4866182 −12,569.4866182
qABC(r = 1) −12,569.4866182 2.5073039E−12 −12,569.4866182 −12,569.4866182
qABC(r = 1.5) −12,569.4866182 2.6359661E−12 −12,569.4866182 −12,569.4866182
qABC(r = 2) −12,569.4866182 2.4404304E−12 −12,569.4866182 −12,569.4866182
qABC(r = 2.5) −12,569.4866182 2.2277979E−12 −12,569.4866182 −12,569.4866182
qABC(r = 3) −12,569.4866182 2.4404304E−12 −12,569.4866182 −12,569.4866182
qABC(r =∞) −12,569.4866182 2.1522573E−12 −12,569.4866182 −12,569.4866182

Table 8
Performance comparison of qABC algorithms with different r values on SixHumpCamelBack function.

Algorithm Mean SD Best Worst

qABC(r = 0) −1.0316284 0 −1.0316284 −1.0316284


qABC(r = 0.25) −1.0316284 0 −1.0316284 −1.0316284
qABC(r = 0.5) −1.0316284 0 −1.0316284 −1.0316284
qABC(r = 1) −1.0316284 0 −1.0316284 −1.0316284
qABC(r = 1.5) −1.0316284 5.9542190E−15 −1.0316284 −1.0316284
qABC(r = 2) −1.0316284 3.9448606E−15 −1.0316284 −1.0316284
qABC(r = 2.5) −1.0316284 2.0726806E−15 −1.0316284 −1.0316284
qABC(r = 3) −1.0316284 1.1523523E−15 −1.0316284 −1.0316284
qABC(r =∞) −1.0316284 0 −1.0316284 −1.0316284

Table 9
Performance comparison of qABC algorithms with different r values on Branin function.

Algorithm Mean SD Best Worst

qABC(r = 0) 0.3978874 0 0.3978874 0.3978874


qABC(r = 0.25) 0.3978874 5.3906531E−12 0.3978874 0.3978874
qABC(r = 0.5) 0.3978874 4.4721075E−11 0.3978874 0.3978874
qABC(r = 1) 0.3978874 9.7048440E−10 0.3978874 0.3978874
qABC(r = 1.5) 0.3978874 1.6710417E−09 0.3978874 0.3978874
qABC(r = 2) 0.3978874 5.7917265E−09 0.3978874 0.3978874
qABC(r = 2.5) 0.3978874 3.0718302E−09 0.3978874 0.3978874
qABC(r = 3) 0.3978874 2.4476542E−09 0.3978874 0.3978874
qABC(r =∞) 0.3978874 2.9740517E−09 0.3978874 0.3978874
Table 10
Comparison of the qABC algorithm with state of art algorithms. (For each function in Table 1, the table gives the mean and SD of 30 runs obtained by GA, PSO, DE, ABC and qABC with r = 1.)

Table 11
Wilcoxon signed rank test results.

Function           Mean difference   p-Value
Sphere             0                 –
Rosenbrock         0.0437759         0.711
Rastrigin          0                 –
Griewank           0                 –
Schaffer           −8.66098E−06      0.000
Dixon-Price        −1.15022E−12      0.000
Ackley             0                 –
Schwefel           0                 –
SixHumpCamelBack   0                 –
Branin             −5.90257E−10      0.000

Moreover, the objective function values of the best and the worst of the 30 runs are given in these tables, too. From the tables, it is very easy to see the similar mean and SD values presenting the performances of qABC with different values of the parameter r.

The qABC algorithm hits the optimum result in each of the 30 independent runs on the Sphere and Rastrigin problems for all r values. Tables 7 and 8 present the effect of the parameter r on the Schwefel and SixHumpCamelBack functions, respectively; for all r values, qABC finds optimum results in terms of the mean, best and worst of 30 runs on these test functions. The same performance comparison on the Rosenbrock function is demonstrated in Table 2: qABC(r = 1.5) produces the most successful mean, SD and worst results, and qABC(r = 2.5) is the most successful one in terms of the best of 30 runs among the qABC algorithms with different r values. In Table 9, the effect of the parameter r on the Branin function is presented. For all r values, qABC finds the same objective function value, which is very close to the optimum, in all columns of the table except the SD column; only qABC(r = 0) obtains a standard deviation of 0 on this test function. For most of the r values, qABC finds optimum results for the Griewank function in Table 3; only qABC(r = 2.5) and qABC(r = 3) give results different from the optimum in the mean and SD columns. However, these results are very close to the optimum ones, and at least once in 30 runs qABC finds the optimum solution of the Griewank function. Tables 4 and 5 demonstrate the performance of qABC on the Schaffer and Dixon-Price functions, respectively. These results show that qABC is affected by the value of the parameter r on these test functions: qABC(r = 0) presents the most successful results in all fields of the table for both functions, and only qABC(r = 0) finds the optimum result for the Schaffer function. Table 6 shows the results of qABC on the Ackley function. For all r values, qABC presents similar results at the end of the optimization process; qABC(r = 0) generates the best results in terms of the mean and SD of 30 runs, and when r is smaller than 1.5, qABC gives better values in the "worst" column on the Ackley function.

Considering the objective function values and the evaluation numbers, the convergence graphics of qABC with different r values are shown in Figs. 1–10 for Sphere, Rosenbrock, Rastrigin, Griewank, Schaffer, Dixon-Price, Ackley, Schwefel, SixHumpCamelBack and Branin, respectively. When these figures are examined, the first six of them present a remarkable difference between the qABCs having r ≥ 1 and the other ones. The speed of convergence looks similar for the r ≥ 1 valued qABC algorithms on these test problems, except Schwefel: although qABC(r = 1) has a quicker convergence performance than qABC(r = 0) and qABC(r = 0.5), in early evaluations its convergence is slower than in the case of r > 1 on the Schwefel function. When the graphics of the first six functions are examined for values of r smaller than 1, qABC(r = 0.5) converges slightly better than qABC(r = 0) on the Griewank, Dixon-Price and Ackley functions. For the Schaffer function, the convergence speed of qABC(r = 0.5) is considerably better than the performance of qABC(r = 0), while there is no significant difference between these two r values on the Rosenbrock and Rastrigin functions.
Fig. 1. qABC algorithms' convergence performance on Sphere function.

Fig. 2. qABC algorithms' convergence performance on Rosenbrock function.

Since qABC converges to the optimal values in very early evaluations on the Branin and SixHumpCamelBack functions for all r values, comparing the speed of convergence through the optimization process is meaningless; actually, it can be said that qABC has a very successful convergence performance on these two 2-dimensional test problems for all considered r values.

These graphics show that the parameter r is one of the main factors for the convergence speed of the qABC algorithm. It should be remembered that, when r = 0, Eq. (5) becomes equal to Eq. (2) and qABC works like the standard ABC. Generally, when r ≥ 1, the standard ABC requires at least two times more function evaluations than qABC to reach the same mean value. When the convergence graphics and the results in the tables are evaluated together, it can be generalised that a value around 1 is appropriate for r in the qABC algorithm. So, in the comparison of qABC with the state-of-the-art algorithms (GA, PSO, DE, ABC), the results of qABC with the parameter r = 1 were used. These comparison results are given in Table 10.

Fig. 3. qABC algorithms' convergence performance on Rastrigin function.


Fig. 4. qABC algorithms' convergence performance on Griewank function.

Fig. 5. qABC algorithms' convergence performance on Schaffer function.

In the table, the mean of 30 independent runs and the standard deviations are presented for the considered problems. For a fair comparison, table values below 1E−12 are accepted as 0, as in [31].

When Table 10 is examined, it can be seen that GA has the worst performance among the considered algorithms for all problems except Schwefel, SixHumpCamelBack and Branin; it has the best performance on the SixHumpCamelBack and Branin functions. However, since there is not a really remarkable difference between the performances of the algorithms, we can say that all algorithms perform well on the SixHumpCamelBack and Branin test functions. On the Rosenbrock function, the standard ABC and qABC algorithms find smaller objective function values than the other compared algorithms, and the best mean and SD values belong to the qABC algorithm. The qABC and ABC algorithms find optimum results for the Rastrigin and Griewank functions, while the other algorithms do not; the performance of PSO and DE on Griewank is better than on the Rastrigin function, whereas GA's performance does not show a remarkable difference between these two test functions. PSO and DE achieve the optimum results within the given number of function evaluations, while ABC, qABC and GA converge toward the optimum with some error on the Schaffer function. The qABC and ABC algorithms present excellent performance on the Schwefel and Dixon-Price functions, while the other algorithms do not provide such good results.

Fig. 6. qABC algorithms' convergence performance on Dixon-Price function.


Fig. 7. qABC algorithms' convergence performance on Ackley function.

Fig. 8. qABC algorithms' convergence performance on Schwefel function.

Both ABC algorithms give the same mean value, which is very close to the optimum, for the Schwefel function, and ABC hits the optimum mean value of Dixon-Price while qABC does not. For the Ackley problem, all of the algorithms find the optimum results except GA and PSO; PSO's result is closer to 0 than that of GA.

The table clearly shows that the ABC and qABC algorithms outperform GA, PSO and DE under these conditions on the considered test problems. However, it is not very clear whether there is a significant difference between the performances of the two ABC algorithms, which produce very similar results and present the best mean values for six of the ten problems among the compared optimization algorithms. So, in order to compare the performance of qABC and ABC, the Wilcoxon signed rank test was used in this paper. The Wilcoxon test is a nonparametric statistical test that can be used for analysing the behaviour of evolutionary algorithms [33]. The test results are shown in Table 11. The first column of the table presents the test functions, the second column gives the mean difference between the results of ABC and qABC, and the last column gives the p value, which is an important determiner of the test. Since the mean difference is 0 for six test functions, there are four test problems for which the significance of the difference between the performances of the algorithms can be discussed.

Fig. 9. qABC algorithms' convergence performance on SixHumpCamelBack function.


Fig. 10. qABC algorithms' convergence performance on Branin function.

Among these four test problems, the p value is different from 0 only for Rosenbrock, so only for the Rosenbrock problem is there not enough evidence to reject the null hypothesis (0.711 > 0.05). These tests show that, under these conditions, the performance of the ABC algorithm is significantly better than that of qABC for the other three test functions (Schaffer, Dixon-Price and Branin). It should be emphasised that these tests are based on the final results obtained by the algorithms.
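Such a paired test can be run with SciPy along the following lines; the two arrays here are placeholders standing in for the 30 per-run final objective values of ABC and qABC, which are not listed in the paper:

import numpy as np
from scipy.stats import wilcoxon

# Placeholder data; substitute the paired per-run results of the two algorithms.
abc_runs = np.array([0.18, 0.12, 0.25, 0.09, 0.31, 0.15, 0.22, 0.11])
qabc_runs = np.array([0.13, 0.10, 0.21, 0.11, 0.22, 0.14, 0.19, 0.12])

stat, p = wilcoxon(abc_runs, qabc_runs)   # paired, nonparametric
# The null hypothesis of equal medians is rejected at the 0.05 level when p < 0.05.
print(f"W = {stat:.3f}, p = {p:.3f}")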
Generally, we could interpret the simulation and test results as follows: when Eq. (5) is used for onlookers to produce new solutions with r ≥ 1, the local convergence performance of ABC is significantly improved, especially in the early cycles of the optimization process. So, a good tuning of the parameter r promises a superior convergence performance for the qABC algorithm.
4.1. Time complexity of ABC algorithms

In this section, a time complexity analysis is carried out for the ABC and qABC algorithms on the Rosenbrock function. In order to present the relationship between the time complexity and the dimension of the problem, the complexities are calculated for the dimensions 10, 30 and 50, as described in [32]. The results are shown in Table 12 for the ABC and qABC algorithms. These analyses were performed on Windows 7 Professional (SP1), on an Intel(R) Core(TM) i7 M640 2.80 GHz processor with 8 GB RAM; the algorithms were coded in the C# programming language, and the .NET Framework 3.5 was used.

The code execution time of this system was obtained and is given in the table as T0. Also, the computing time of the Rosenbrock function for 200,000 function evaluations is presented as T1. Each of the algorithms was run 5 times for 200,000 function evaluations and the average computing time is presented as T̂2. The algorithm complexities were calculated as (T̂2 − T1)/T0. The time complexity of qABC is higher than that of the ABC algorithm, since there is an additional part in the phase of onlooker bees. However, it should be noticed that the increment rate of the complete computing time of qABC is lower than the increment rate of the dimension, as for the ABC algorithm. So, it can be indicated that there is not a strict dependence between the dimension of the problem and the complexities of qABC and ABC.
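The measure (T̂2 − T1)/T0 can be reproduced along these lines; the calibration loop below is only a stand-in for the exact CEC'05 timing program of [32], and run_algorithm is a placeholder for a complete ABC or qABC run with a fixed evaluation budget:

import time
import numpy as np

def elapsed(fn, *args):
    t = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t

def calibration():
    # Stand-in for the fixed computation whose execution time is T0 in [32]
    s = 0.0
    for i in range(1, 1_000_000):
        s += (s / i + i) * 0.5

def complexity(run_algorithm, f, d, n_evals=200_000, repeats=5):
    t0 = elapsed(calibration)                            # T0
    pts = np.random.rand(n_evals, d)
    t1 = elapsed(lambda: [f(p) for p in pts])            # T1: raw evaluations
    t2 = np.mean([elapsed(run_algorithm, f, d, n_evals)  # T^2: mean of 5 full runs
                  for _ in range(repeats)])
    return (t2 - t1) / t0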
4.2. Experiments on "colony size"

For the experiments in this section, four test functions from Table 1 were selected; each has a different character. These benchmarks are the Sphere, Rosenbrock, Rastrigin and Griewank functions. In the experiments, r was set to 1. The same parameter setting was used as in the previous experiments (maximum evaluation number = 500,000), and the limit value was calculated by using Eq. (9), as indicated before. The qABC algorithm was tested on the mentioned functions for several different colony size (CS) values: 4, 6, 12, 24, 50, 100 and 200, and the results of the experiments are presented in Table 13.

The table shows that the qABC algorithm gives the optimal results in all fields, without being influenced by changes in CS, for the values CS > 6 and CS > 12 on the Sphere and Rastrigin functions, respectively. The optimal values are also found by the algorithm on the Griewank function with CS = 50 and CS = 200. Although the algorithm finds results very close to the optimum for CS = 100, when CS < 50 there is no efficient convergence to the optimum on the Griewank test function. Considering the standard deviations, the mean values are very similar on the Rosenbrock problem for the CS intervals 24–200 and 6–12. However, when CS = 4, the results of qABC get significantly worse on the Rosenbrock function.

4.3. Experiments on "limit"

The same test functions as in the previous section were used to test the qABC algorithm for different limit values (10, 50, 187, 375, 750, 1500), to observe the relation between the parameter limit and the performance of the algorithm.

Table 12
Time complexities of the ABC and qABC algorithms on Rosenbrock function.

D    T0         T1         T̂2 of ABC    T̂2 of qABC   Complexity of ABC ((T̂2 − T1)/T0)   Complexity of qABC ((T̂2 − T1)/T0)
10   0.088005   0.4280245  0.58063322   2.85596336   1.73409147207545                    27.5886467814329
30   0.088005   1.3590778  1.66389516   8.28667398   3.46363683881598                    78.7182112379978
50   0.088005   2.3511345  2.72615592   13.7839884   4.26136492244759                    129.911412987898
Table 13
Effect of the colony size (CS) on the performance of qABC algorithm.

CS    Sphere Mean     Sphere SD       Rosenbrock Mean  Rosenbrock SD  Rastrigin Mean   Rastrigin SD     Griewank Mean    Griewank SD
4     0.0295194       0.0564562       80.0869673       46.9672644     5.9836256        1.8967871        0.1236460        0.0935611
6     2.6999635E−15   2.3764678E−15   0.2882164        0.3470707      0.25516277       0.4203245        2.4260047E−05    7.2695534E−05
12    0               0               0.3033491        0.5071798      3.7133555E−12    6.8609101E−12    4.1002980E−10    1.6459408E−09
24    0               0               0.1086825        0.1165915      0                0                1.5733340E−13    7.7332947E−13
50    0               0               0.1329198        0.1799131      0                0                0                0
100   0               0               0.0618102        0.1052786      0                0                2.1353290E−15    1.1087109E−14
200   0               0               0.1350616        0.1581752      0                0                0                0

Table 14
Effect of the limit on the performance of qABC algorithm.

Limit  Sphere Mean     Sphere SD       Rosenbrock Mean  Rosenbrock SD  Rastrigin Mean   Rastrigin SD     Griewank Mean    Griewank SD
10     7.4129058       2.9555841       654.3308640      181.7699715    26.6525432       3.6289378        1.0672783        0.0355146
50     6.2867716E−09   4.7047270E−09   0.6935193        0.4453130      1.0218641E−05    1.5189265E−05    3.6988779E−06    5.1940736E−06
187    1.1402053E−15   0               0.1007734        0.1877014      6.8093679E−15    6.9419476E−15    2.5103216E−12    1.1533350E−11
375    0               0               0.1084152        0.1497979      0                0                1.8577732E−14    8.8020357E−14
750    0               0               0.1329198        0.1799131      0                0                0                0
1500   0               0               0.1919979        0.3661197      0                0                0                0
The same parameter setting as in the previous experiments (colony size = 50 and maximum evaluation number = 500,000) was used. In terms of the mean and standard deviation of 30 independent runs, the simulation results are given in Table 14. On the Griewank, Sphere and Rastrigin functions, the results get better as the limit values increase; the algorithm achieves the optimum results when the limit value is l ≥ 750 for the Griewank function and l ≥ 375 for the Sphere and Rastrigin functions. When the standard deviation is considered, the difference between the mean objective function values produced for different limit values looks very small on the Rosenbrock function, except for the smallest limit value, 10.

The results of these experiments showed that 750, which is equal to the value calculated by Eq. (9), is a suitable value for the limit parameter.

5. Conclusion

In this paper a new definition for the behaviour of the onlooker bees of the ABC algorithm was presented, and a novel version of ABC called quick ABC (qABC) was described. Experimental studies showed that the new definition significantly improves the convergence performance of the standard ABC when the neighbourhood radius r is set appropriately.

The performance of the qABC algorithm was compared with the standard ABC and state-of-the-art algorithms. The results showed that the qABC algorithm presents promising results for the considered problems. In order to analyse the effect of the parameters limit and colony size on the performance of the qABC algorithm, some experiments were also conducted. Moreover, time complexity analyses were carried out for the ABC and qABC algorithms.

In the future, the adaptation of the parameter r can be studied to improve the performance of qABC. It is also noted that the qABC algorithm can be used for all types of optimization problems, such as binary, combinatorial and integer optimization problems.

References

[1] D. Karaboga, Artificial bee colony algorithm, Scholarpedia 5 (3) (2010) 6915. www.scholarpedia.org/article/Artificial_bee_colony_algorithm
[2] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[3] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. (2012), http://dx.doi.org/10.1007/s10462-012-9328-0.
[4] P.W. Tsai, J.S. Pan, B.Y. Liao, S.C. Chu, Enhanced artificial bee colony optimization, Int. J. Innov. Comput. Inf. Control 5 (12) (2009) 5081–5092.
[5] H. Narasimhan, Parallel artificial bee colony (PABC) algorithm, in: Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), 2009, pp. 306–311.
[6] M. Subotic, M. Tuba, N. Stanarevic, Parallelization of the artificial bee colony (ABC) algorithm, in: Proceedings of the 11th WSEAS International Conference on Neural Networks and 11th WSEAS International Conference on Evolutionary Computing and 11th WSEAS International Conference on Fuzzy Systems, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA, 2010, pp. 191–196.
[7] W. Zou, Y. Zhu, H. Chen, X. Sui, A clustering approach using cooperative artificial bee colony algorithm, Discret. Dyn. Nat. Soc. (2010), http://dx.doi.org/10.1155/2010/459796.
[8] M. Subotic, M. Tuba, N. Stanarevic, Different approaches in parallelization of the artificial bee colony algorithm, Int. J. Math. Model. Method Appl. Sci. 5 (4) (2011) 755–762.
[9] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. (2010), http://dx.doi.org/10.1016/j.amc.2010.08.049.
[10] X. Xu, X. Lei, Multiple sequence alignment based on ABC SA, in: Proceedings of the Artificial Intelligence and Computational Intelligence, Lecture Notes in Computer Science, vol. 6320, 2010, pp. 98–105.
[11] M. Tuba, N. Bacanin, N. Stanarevic, Guided artificial bee colony algorithm, in: Proceedings of the European Computing Conference (ECC11), 2011, pp. 398–403.
[12] G. Li, P. Niu, X. Xiao, Development and investigation of efficient artificial bee colony algorithm for numerical function optimization, Appl. Soft Comput. (2011), http://dx.doi.org/10.1016/j.asoc.2011.08.040.
[13] A. Banharnsakun, B. Sirinaovakul, T. Achalakul, Job shop scheduling with the best-so-far ABC, Eng. Appl. Artif. Intell. 25 (3) (2012) 583–593.
[14] X. Bi, Y. Wang, An improved artificial bee colony algorithm, in: Proceedings of the 3rd International Conference on Computer Research and Development (ICCRD), vol. 2, 2011, pp. 174–177.
[15] W.F. Gao, S.Y. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (3) (2012) 687–697.
[16] E. Mezura-Montes, O. Cetina-Dominguez, Empirical analysis of a modified artificial bee colony for constrained numerical optimization, Appl. Math. Comput. 218 (22) (2012) 10943–10973.
[17] N. Bacanin, M. Tuba, Artificial bee colony (ABC) algorithm for constrained optimization improved with genetic operators, Stud. Inf. Control 21 (2) (2012) 137–146.
[18] W.F. Gao, S.Y. Liu, F. Jiang, An improved artificial bee colony algorithm for directing orbits of chaotic systems, Appl. Math. Comput. 218 (7) (2011) 3868–3879.
[19] W.F. Gao, S.Y. Liu, L.L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (11) (2012) 2741–2753.
[20] Y. Liu, X.X. Ling, Y. Liang, G.H. Liu, Improved artificial bee colony algorithm with mutual learning, J. Syst. Eng. Electron. 23 (2) (2012) 265–275.
[21] F. Kang, J. Li, Q. Xu, Hybrid simplex artificial bee colony algorithm and its application in material dynamic parameter back analysis of concrete dams, J. Hydraul. Eng. 40 (6) (2009) 736–742.
[22] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Comput. Struct. 87 (13–14) (2009) 861–870.
[23] Y. Marinakis, M. Marinaki, N. Matsatsinis, A hybrid discrete artificial bee colony – GRASP algorithm for clustering, in: Proceedings of the International Conference on Computers and Industrial Engineering (CIE 2009), vols. 1–3, 2009, pp. 548–553.
[24] R. Xiao, T. Chen, Enhancing ABC optimization with Ai-net algorithm for solving project scheduling problem, in: Proceedings of the 7th International Conference on Natural Computation (ICNC), vol. 3, 2011, pp. 1284–1288.
[25] W. Bin, C.H. Qian, Differential artificial bee colony algorithm for global numerical optimization, J. Comput. 6 (5) (2011) 841–848.
[26] T.K. Sharma, M. Pant, Differential operators embedded artificial bee colony algorithm, Int. J. Appl. Evol. Comput. 2 (3) (2011) 1–14.
[27] F. Kang, J. Li, Z. Ma, H. Li, Artificial bee colony algorithm with local search for numerical optimization, J. Softw. 6 (3) (2011) 490–497.
[28] T.J. Hsieh, H.F. Hsiao, W.C. Yeh, Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm, Neurocomputing 82 (2012) 196–206.
[29] A. Abraham, R.K. Jatoth, A. Rajasekhar, Hybrid differential artificial bee colony algorithm, J. Comput. Theor. Nanosci. 9 (2) (2012) 249–257.
[30] D. Karaboga, B. Gorkemli, A quick artificial bee colony – qABC – algorithm for optimization problems, in: Proceedings of the 2012 International Symposium on Innovations in Intelligent Systems and Applications (INISTA), 2–4 July, Turkey, 2012, http://dx.doi.org/10.1109/INISTA.2012.6247010.
[31] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[32] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.
[33] S. Garcia, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms behaviour: a case study on the CEC2005 special session on real parameter optimization, J. Heuristics 15 (6) (2009) 617–644.
