PII: S0020-0255(17)30274-8
DOI: 10.1016/j.ins.2017.08.067
Reference: INS 13072
Please cite this article as: Yiwen Zhong, Juan Lin, Lijin Wang, Hui Zhang, Hybrid Discrete Artificial Bee
Colony Algorithm with Threshold Acceptance Criterion for Traveling Salesman Problem, Information
Sciences (2017), doi: 10.1016/j.ins.2017.08.067
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT
a College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
b Department of Computer Engineering and Computer Science, University of Louisville,
Abstract
Artificial bee colony (ABC) algorithm, which has explicit strategies to balance intensification and diversification, is a smart swarm intelligence algorithm and was first proposed for continuous optimization problems. In this paper, a hybrid discrete ABC algorithm, which uses the acceptance criterion of the threshold accepting method, is proposed for the Traveling Salesman Problem (TSP). A new solution updating equation, which can learn both from other bees and from features of the problem at hand, is designed for the TSP. Aiming to enhance its ability to escape from premature convergence, employed bees and onlooker bees use the threshold acceptance criterion to decide whether or not to accept newly produced solutions. Systematic experiments were performed to show the advantage of the new solu-
∗ Corresponding author
Email address: yiwzhong@fafu.edu.cn (Yiwen Zhong)
1. Introduction
tralized and self-organized swarms. Two fundamental concepts considered as necessary properties of SI are self-organization and division of labor[1]. Honey bee swarms, which clearly have strong self-organization and division of labor
to solve real world problems[15]. ABC algorithm, which simulates foraging behavior of honey bees, was invented by Karaboga [14] and was first described for solving numerical optimization problems[5]. Although numerical problems are still active research fields for ABC, more and more variants have been introduced for discrete and combinatorial problems, such as 0-1 knapsack[27], multiple sequence alignment[39], wireless sensor network routing[16], symbolic
binatorial optimization problem. The purpose of the TSP is to find the shortest tour visiting each city once and only once. As an NP-complete problem,
tion and logistics, finding suboptimal solutions with a reasonable cost may be more advantageous, and therefore a lot of interest has been focused on using
buffalo optimization [23], simulated annealing (SA) algorithms [10, 36], and some hybrid algorithms [11, 22], etc.
ABC algorithm is an outstanding SI algorithm for continuous optimization problems. There are several features contributing to its success. The solution updating equation, which cooperates with other bees and updates one variable each time, can search the solution space intelligently in a fine-grained manner. The greedy acceptance strategy and the onlooker can enhance its intensification ability, and the scout can enlarge its diversification. Working together, those strategies can produce a good balance between intensification and diversification for continuous optimization problems. Although the basic ABC algorithm is simple and easy to implement, applying the ABC algorithm to combinatorial problems is not a simple task. We must carefully redesign the solution updating equation for the problem at hand, so the solution updating equation can retain its good features. Next we must recheck the integrated effect of the greedy acceptance strategy, onlooker, and scout, so a good balance between intensification and diversification can be obtained in the search process.
Aiming to study the basic principles of how to extend the ABC algorithm for discrete optimization problems, this paper presents a hybrid discrete ABC (HDABC) algorithm for the TSP. In the HDABC algorithm, a novel solution updating equation, which can learn both from other bees and from features of the TSP, is designed to produce new solutions for employed bees and onlooker bees. Because the scout bee alone is not effective enough for the HDABC algorithm to keep sufficient diversification for the TSP, the greedy acceptance strategy is replaced with
2. Related Works
This section introduces the basic ABC algorithm, the TSP, and meta-heuristics for the TSP. In section 2.1, the principles and pseudocode of the basic ABC algorithm are described in detail. Section 2.2 introduces the TSP and its goal. Section 2.3 gives a brief survey of state-of-the-art meta-heuristics for the TSP and analyzes the main advantages and disadvantages of current ABC algorithms for the TSP.
ABC algorithm was first proposed by Karaboga in 2005[14] for solving numerical optimization problems. Since the ABC algorithm is simple to understand, easy to implement, and has few control parameters, it has been widely used in many fields. In the ABC algorithm, artificial honey bees consist of employed bees, onlooker bees and scout bees, among which the numbers of employed bees and onlooker bees are equal. A bee waiting on the dance area to decide which food source to choose is called an onlooker bee; a bee going to the food source it visited previously is named an employed bee. A bee carrying out random search is called a scout. The process of bees looking for a food source is the process of finding the optimum solution: each solution of the optimization problem is considered as a food source position in the search space, and the fitness of a solution represents the profitability of the food source. The number of solutions N is equal to the number of employed bees or onlooker bees. First of all, the ABC algorithm
where k = 1, 2, · · · , D, j ≠ i and j ∈ {1, 2, · · · , N} is a randomly chosen index, and d is a randomly selected dimension from {1, 2, · · · , D}. Moreover, r is a random number in [−1, 1]. The solution updating equation has two main features: one is that it updates one dimension only, and the other is that it learns from another randomly selected bee xj. Updating one dimension guarantees that bees can search the solution space in a much more fine-grained way. Learning from another bee, which is achieved by using the difference between itself and another bee to update the solution, not only can produce an intelligent and adaptive neighborhood structure, but also is the main origin of the collective behavior of the swarm.
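The one-dimension update described above can be sketched as follows; a minimal sketch, assuming real-valued solutions stored as lists and hypothetical bound vectors `lower`/`upper`:

```python
import random

def abc_update(x, swarm, i, lower, upper):
    """One-dimension ABC update: the candidate v differs from x in a
    single randomly chosen dimension d, guided by another bee x_j."""
    D = len(x)
    d = random.randrange(D)                          # random dimension
    j = random.choice([k for k in range(len(swarm)) if k != i])
    r = random.uniform(-1.0, 1.0)                    # r in [-1, 1]
    v = list(x)
    v[d] = x[d] + r * (x[d] - swarm[j][d])           # learn from bee j
    v[d] = min(max(v[d], lower[d]), upper[d])        # clamp to bounds
    return v
```

Because only one coordinate moves per call, successive calls explore a fine-grained neighborhood around the current solution.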
After all the employed bees have completed the search process, they share the nectar information of the food sources with the onlooker bees on the dance area. An onlooker bee evaluates the nectar information taken from all employed bees, and then it chooses a food source by using a selection probability. The higher a solution's fitness is, the higher its selection probability is. The selection probability is described in Eq. 2, where fit(i) is the fitness of the ith solution and N is the number of solutions. After selecting a food source, onlookers carry out the exploitation process using Eq. 1 like employed bees.

pi = fit(i) / Σj=1..N fit(j)    (2)
where xj,min is the lower bound of dimension j and xj,max is the upper bound of dimension j.
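The random initialization within bounds can be sketched as follows; a minimal sketch in the spirit of Eq. 3, assuming bound vectors `xmin`/`xmax`:

```python
import random

def init_solution(xmin, xmax):
    """Produce one random initial solution within the variable bounds:
    x_j = x_j,min + rand(0, 1) * (x_j,max - x_j,min)."""
    return [lo + random.random() * (hi - lo) for lo, hi in zip(xmin, xmax)]
```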
The ABC algorithm's pseudocode is listed in algorithm 1 and algorithm 2. Algorithm 1 is the basic framework of the ABC algorithm, and algorithm 2 is used by employed bees and onlooker bees to update solutions. The parameter t of algorithm 2 represents how many times a bee will use the solution updating equation to produce new solutions in each generation. For the basic ABC algorithm, the parameter t is set to 1.
TSP is one of the most famous hard combinatorial optimization problems. It belongs to the class of NP-complete optimization problems. This means that no known polynomial time algorithm can guarantee finding its global optimal solution. Consider a salesman who has to visit n cities. The objective of the TSP is to find a shortest tour through all the cities such that no city is visited twice and the salesman returns to the starting city at the end of the tour. It can be defined as follows. For an n-city problem, we can use a distance matrix D = (di,j)n×n to store the distances between all pairs of cities, where each element di,j of matrix D represents the distance between city i and city j. We can use a linked list to represent a solution, and use a vector x to implement the linked list. Each element xi in x represents an edge from city i to city xi. The goal is to find a
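The successor encoding described above makes the tour length easy to evaluate; a minimal sketch, assuming a distance matrix `dist` given as nested lists:

```python
def tour_length(x, dist):
    """Tour length under the successor encoding: x[i] is the city
    visited immediately after city i, so the tour uses exactly the
    edges (i, x[i]) for every city i."""
    return sum(dist[i][x[i]] for i in range(len(x)))
```

For example, with three cities and the tour 0 → 1 → 2 → 0, x = [1, 2, 0] and the length is the sum of the three corresponding matrix entries.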
In recent years, many meta-heuristics have been proposed for the TSP.
Ismkhan [13] proposed a new ant colony optimization (ACO) algorithm with
//Initialization Phase
1: Setup the parameters, such as limit and parameter t in Algorithm 2.
2: Produce the initial solutions using Eq. 3.
3: Calculate fitness of the solutions.
4: while termination condition is not met do
//Employed Bee Phase
5: for each Employed Bee do
6: Search new solution(t)
7: end for
//Onlooker Bee Phase
8: Calculate selection probabilities of the employed bees using Eq. 2.
4: if the fitness of candidate solution is better than current solution x then
5: x = y.
6: end if
7: if solution has been improved then
8: Reset limit to 0.
9: else
10: Increase its limit by 1.
11: end if
12: until the statements repeat t times
three effective strategies. The three strategies include pheromone representation with linear space complexity, a new next-city selection, and pheromone-augmented 2-opt local search. Yong [41] proposed a hybrid Max-Min ant system which is integrated with a four vertices and three lines inequality. The four vertices and three lines inequality is used as a local search strategy for each ant. Escario et al. [7] proposed an ant colony extended algorithm which includes a self-organization property. This self-organization property is based on task division and an emergent task distribution based on the feedback provided by the results of ants' searches. Wang et al. [37] proposed a multi-offspring GA where the crossover operator and mutation operator generate more offspring than
CS algorithm which uses a learning operator, 'A' operator and 3-opt to accelerate the convergence rate. Osaba et al. [24] proposed an improved discrete BA (IDBA) which uses Hamming distance to measure the distance between bats, and 2-opt and 3-opt operators are used to improve solutions. Saji et al. [30] proposed a novel discrete BA where a two-exchange crossover operator is used to update solutions and a 2-opt operator is used to improve solutions. Saraei et al. [31] proposed a firefly algorithm which uses greedy swap to improve solutions. Xu et al. [40] proposed an immune algorithm combined with estimation of distribution algorithm (IA-EDA). In IA-EDA, a heuristic refinement local search operator is proposed to repair infeasible solutions. Zhou et al. [42] proposed a discrete invasive weed optimization which uses a 3-opt local search operator and an improved complete 2-opt to generate new solutions. Odili et al. [23] proposed an African buffalo algorithm for the TSP. Geng et al. [10] proposed an adaptive SA algorithm with greedy search (ASA-GS), where a greedy search technique is used to speed up the convergence rate. Wang et al. [36] proposed a multi-agent SA with instance-based sampling (MSA-IBS) where instance-based sampling is used to improve the efficiency of sampling. Mahi et al. [22] presented a hybrid method, which used the PSO algorithm, ACO algorithm, and 3-Opt heuristic. The PSO algorithm is used for detecting optimum values of parameters used for city selection operations in the ACO algorithm. The 3-Opt algorithm is used to further improve the best solution produced by the ACO algorithm.
Several versions of the ABC algorithm have been proposed for solving the TSP. Kocer et al. [19] proposed an improved ABC algorithm which uses a loyalty function and a threshold to decide whether a bee is an employed bee or an onlooker bee, and employed bees are enriched by a 2-opt local search algorithm. Shokouhifar et al. [33] presented a hybrid ABC algorithm where swap, inverse, relocation, and Or-opt are used to produce new solutions and the result of the ABC algorithm is further enriched by an SA algorithm. Akay et al. [2] proposed a 2-opt based ABC algorithm where a neighbor-based 2-opt move and the 2-opt algorithm are used to produce new solutions. Kiran et al. [18] proposed a discrete ABC (DABC) where nine neighborhood operators are used as solution updating equations of
basic ABC, and 2-opt or 3-opt heuristic approaches are used to enrich the results obtained by DABC. Gündüz et al. [11] proposed a hybrid method (ACO-ABC) where ACO is used to provide a better initial solution for the ABC. Sabet et al. [29] proposed a hybrid mutation-based ABC algorithm where swap, insert, and biased insert operators are used to produce new solutions. Li et al. [21] presented two efficient ABC algorithms with heuristic swap operators, where a nearest neighbor list is used to constrain the number of available cities. Shi and Jia [32] proposed a hybrid ABC algorithm which uses crossover and mutation operators to produce new solutions, and the Metropolis acceptance criterion is used to select a candidate solution from the two solutions produced by the crossover and mutation operators.
After analyzing the solution updating schemes used in variant ABC algorithms for the TSP [19, 33, 18, 29, 21], we have found that in most of them, bees update solutions independently. It means those variants lose one of the important origins of collective behavior for SI algorithms. Akay et al. [2] do use a neighbor-based 2-opt move which uses other bees to guide the solution updating; however the 2-opt algorithm, which is used to produce a new solution in case the solution produced by the neighbor-based 2-opt move is worse than the current solution, is both time consuming and inefficient in this context. Furthermore, the integration of the greedy acceptance strategy and the 2-opt algorithm, which uses a greedy strategy also, may deteriorate ABC's diversification. The crossover operator used in [32] can learn from other bees also, but the crossover operator will change a large part of the solution, so it is not good at searching the solution space finely. Shi and Jia [32] try to use the Metropolis acceptance criterion of the SA algorithm to balance intensification and diversification. But they use it to select a solution from those produced by the crossover operator and those produced by the mutation operator; this dramatically weakens its effect because the selected result still must compete with the current solution under the greedy acceptance strategy. Aiming to tackle those shortcomings, we propose the HDABC algorithm, which can not only retain the collaborative learning ability and fine-grained search ability of the classic ABC algorithm, but can also get a better balance between intensification
This section presents the main ideas of the HDABC algorithm and the strategies it uses. Section 3.1 explains the solution updating equation and its concrete implementation. Section 3.2 introduces the four selection operators used by onlooker bees in detail. Section 3.3 describes the threshold acceptance criterion used by the HDABC algorithm. Finally, section 3.4 gives a detailed description of the pseudocode of the HDABC algorithm.
the linked list. Each element xi in x represents an edge ei,xi from city i to city xi. We define the minus operator on two elements xi and yi from solutions x and y as follows:

yi − xi = ei,yi if yi ≠ xi; otherwise, an edge selected from the nearest city list of city i    (5)

x = x + (yi − xi)    (6)

where y is another solution randomly selected from the swarm, and the plus operator means adding the edge represented by the second operand to the first operand under the constraint that the validity of the first operand must be retained. The philosophy behind this solution updating equation is that, in case solutions x and y have a different next visiting city from city i, then
solution x will try to learn from solution y by adding the next visiting city of solution y into its solution. On the other hand, in case x and y have the same next visiting city from city i, solution x will try to learn from the problem at hand by selecting a new city from the nearest city list of city i. Learning from other bees can guarantee that the HDABC algorithm has collective behavior, and learning from the problem can improve its efficiency and speed up its convergence. Now the problem is how to add a selected edge into a solution for the TSP.
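The edge-selection half of the update can be sketched as follows; a minimal sketch, assuming the successor encoding above and a hypothetical `nearest` mapping from each city to its nearest-neighbour cities (excluding the current successor):

```python
import random

def select_edge(i, x, y, nearest):
    """Edge selection in the spirit of Eqs. 5 and 6: if the guiding
    solution y visits a different city after i, learn y's edge;
    otherwise learn from the problem by picking a city from the
    nearest-city list of i."""
    if y[i] != x[i]:
        return (i, y[i])                       # learn from another bee
    return (i, random.choice(nearest[i]))      # learn from the problem
```

The returned edge still has to be added to x by a validity-preserving operator, which is the topic of the next paragraphs.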
Wang et al. [36] systematically studied several adding strategies, such as inverse, insert, swap, and their combinations. Their study shows that the greedy hybrid operator has the best performance. This is a kind of operator with multiple neighbors, which selects the best one from those neighbors. Specifically, after an edge ei,j is selected to be added into the current solution, it uses the inverse operator, insert operator and swap operator to produce three neighbor solutions, and the best one is used as the candidate solution. The insert operator in [36] is a kind of point insert operator: only the city j is moved to the back of city i. HDABC uses a similar greedy hybrid operator where the insert operator is replaced with a block insert operator. In the block insert operator, a block of cities which is led by j is moved to the back of i. The point insert operator is a special case of the block insert operator where the size of the block is 1. The length of the block is randomly produced for each insert operation. The block insert operator is described in Fig. 1, where the three dotted lines represent the three replaced (replacing) edges.
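The block insert operator can be sketched as follows; a simplified sketch on a plain city-order encoding rather than the paper's linked-list encoding, assuming city i does not lie inside the moved block:

```python
def block_insert(tour, i, j, block_len):
    """Move the block of `block_len` consecutive cities starting at
    city j so that it follows city i. With block_len = 1 this reduces
    to the point insert operator."""
    pj = tour.index(j)
    block = [tour[(pj + k) % len(tour)] for k in range(block_len)]
    rest = [c for c in tour if c not in block]
    pos = rest.index(i) + 1                    # insert right after city i
    return rest[:pos] + block + rest[pos:]
```

For example, moving the block [2, 3] behind city 0 in the tour [0, 1, 2, 3, 4] yields [0, 2, 3, 1, 4].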
improve. It means those better employed bees will have a higher probability of being selected. There are several possible selection strategies, such as roulette wheel selection, tournament selection, rank selection, and disruptive selection, etc. Bao and Zeng [4] compared the above four selection schemes in the ABC algorithm for continuous problems and drew the conclusion that the ABC algorithm with disruptive selection, tournament selection, or rank selection performed better than the basic ABC algorithm with roulette wheel selection. In order to analyze the relationship among performance, selection pressure and computation resources, this paper systematically compares those four selection schemes under different computation resources in Sec. 4.2.
1) Roulette Wheel Selection
The probability pi, which is used to select the ith employed bee, is calculated by Eq. 2, where fit(i) is the fitness of the ith employed bee and is computed as follows:

fit(i) = 1 / f(i)    (7)

where f(i) is the tour length of the solution of the ith employed bee.
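Combining Eq. 7 with Eq. 2 gives the roulette-wheel probabilities; a minimal sketch over a list of tour lengths:

```python
def selection_probabilities(tour_lengths):
    """Roulette-wheel probabilities: fit(i) = 1/f(i) (Eq. 7) normalized
    over the colony (Eq. 2), so shorter tours are selected more often."""
    fits = [1.0 / f for f in tour_lengths]
    s = sum(fits)
    return [fit / s for fit in fits]
```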
2) Tournament Selection
Tournament selection works as follows: an onlooker bee chooses some number n of employed bees randomly from the population and uses the best one. In this paper, we use binary tournament, where n is equal to 2.
3) Rank Selection
sorted list and not on the actual objective value. Rank-based fitness assignment overcomes the scaling problems of proportional fitness assignment. Rank selection introduces a uniform scaling across the population and provides a simple and effective way of controlling selection pressure. Rank selection may use linear ranking or nonlinear ranking. In this paper, we only study linear ranking. Let N be the number of bees in the population, i the position of an employed bee in this population (the worst bee has i = 1, the best bee i = N)
and sp the selection pressure. In linear ranking, the fitness value for a bee is calculated as:

fit(i) = 2 − sp + 2 · (sp − 1) · (i − 1)/(N − 1)    (8)

where the selection pressure sp is in [1.0, 2.0]. After the fitness fit(i) has been calculated, Eq. 2 is used to calculate the selection probability pi for each employed bee.
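Eq. 8 can be sketched as follows; a minimal sketch, assuming tour lengths as the objective values so that the shortest tour receives rank N:

```python
def linear_rank_fitness(tour_lengths, sp=1.5):
    """Linear-ranking fitness (Eq. 8): the worst bee gets rank i = 1,
    the best gets i = N; sp in [1.0, 2.0] controls selection pressure."""
    N = len(tour_lengths)
    # sort worst-to-best: position 0 = longest tour = rank 1
    order = sorted(range(N), key=lambda k: tour_lengths[k], reverse=True)
    fit = [0.0] * N
    for rank0, k in enumerate(order):          # rank0 = i - 1
        fit[k] = 2 - sp + 2 * (sp - 1) * rank0 / (N - 1)
    return fit
```

With sp = 2 the best bee gets twice the average fitness and the worst gets zero; sp = 1 makes all fitness values equal.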
4) Disruptive Selection
Disruptive selection is a type of natural selection that selects against the average individual in a population. It tends to select both extremes, but little in the middle. In disruptive selection, the fitness value for a bee is calculated as:

fit(i) = |f(i) − f̄|    (9)

where f̄ is the average value of the objective function f. After the fitness fit(i) has been calculated, Eq. 2 is used to calculate the selection probability pi for each employed bee.
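Eq. 9 can be sketched as follows; a minimal sketch, assuming tour lengths as the objective values:

```python
def disruptive_fitness(tour_lengths):
    """Disruptive-selection fitness (Eq. 9): distance from the mean,
    so both very good and very bad bees are favoured over average ones."""
    mean = sum(tour_lengths) / len(tour_lengths)
    return [abs(f - mean) for f in tour_lengths]
```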
bees and onlooker bees to decide whether to accept a new solution. This strategy, helped by scout bees, can maintain enough diversification for numerical problems. But for the TSP, due to its discreteness, the greedy acceptance strategy will easily lead to premature convergence. To tackle this problem, we use the acceptance criterion of the TA method to accept new solutions. The threshold acceptance
how to adjust its value throughout the search. To simplify the implementation of the HDABC algorithm, we use the idea of list-based TA[34],[35]. In list-based TA, the initial value of the threshold and the percentage of threshold reduction are determined by the algorithm automatically, without the intervention of the user[20]. Specifically, a list of thresholds is created first, and then, in each generation, the maximum value in the list is used as the current threshold T in inequality 10. The threshold list is updated adaptively according to the topology of the solution space of the problem.
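The list-based acceptance loop can be sketched as follows; a minimal sketch, where `thresholds` is a plain list of normalized values whose maximum acts as the current threshold T, and `neighbor` is a hypothetical move operator standing in for the solution updating equation:

```python
def threshold_search(x, f, neighbor, thresholds, t):
    """Accept y when the normalized worsening stays below T
    (inequality 10); if any worse move was accepted, replace the
    maximum list entry with the average normalized worsening."""
    T = max(thresholds)
    total, counter = 0.0, 0
    for _ in range(t):
        y = neighbor(x)
        fx, fy = f(x), f(y)
        if fy - fx < T * fx:                  # threshold acceptance
            if fy > fx:                        # a worse solution accepted
                total += (fy - fx) / fx
                counter += 1
            x = y
    if counter > 0:                            # adapt the threshold list
        thresholds[thresholds.index(max(thresholds))] = total / counter
    return x
```

Improving moves are always accepted (their normalized difference is negative), while the list shrinks its maximum only when worse moves were actually taken, so the threshold adapts to the local landscape.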
3.4. Implementation of HDABC Algorithm
The basic framework of the HDABC algorithm is almost the same as the ABC algorithm described in algorithm 1. We update algorithm 1 in two aspects. The first update is that a threshold list is created for each employed bee. Algorithm 3 is used to create the initial threshold list. As shown in line 4 of algorithm 3, normalized threshold values are stored in the list. The second update is that the order of statements 8 and 9 is swapped in the HDABC algorithm. This means the selection probability is recalculated for each onlooker bee. Because an onlooker bee may update an employed bee, this strategy is more consistent with the current status of the bee colony. Because the HDABC algorithm uses the threshold acceptance criterion to accept candidate solutions, algorithm 2, which
stores the number of times a bad solution is accepted, and variable total stores the sum of normalized differences of cost. If any bad solution is accepted, the average of the normalized differences of cost is used to replace the maximum value in the threshold list.
4: Insert |f(y) − f(x)| / f(x) into list.
5: if y is better than x then
6: x = y.
7: end if
8: Return the created threshold list.
9: end while
XIT1083 and DKF3954 instances from the VLSI data sets. The best known integer solutions of those instances are 1621, 2513, 3558 and 12538 respectively. The number of iterations of algorithm 1 was 1000 and the population size was 15. The length of the threshold list was 300. The percentage error of the average solution is used to compare the performances of the HDABC algorithm with different strategies. The following experiments were run on an Intel Core i7-5600 laptop, with 2.60 GHz and 8 GB of RAM. Java was used as the programming language.
Algorithm 4 Search new solution for HDABC(t)
1: T = the maximum threshold value of the threshold list.
2: total = 0, counter = 0.
3: repeat
4: Produce candidate solution y using Eq. 6.
5: Calculate tour length f(y) of the candidate solution y.
6: if f(y) − f(x) < T ∗ f(x) then
7: if f(y) − f(x) > 0 then
8: total = total + (f(y) − f(x))/f(x).
9: counter = counter + 1.
10: end if
11: x = y.
12: end if
13: until the statements repeat t times
17: end if
18: if solution has been improved then
19: Reset limit to 0.
20: else
21: Increase its limit by 1.
22: end if
DKF3954 12538 179.78 9.45 3.15 2.88
4.1. The Importance of Learning
When we use the solution updating Eq. 6 to produce a candidate solution, the selection of the second item, the edge which will be added to the current solution, is guided both by other bees and by knowledge of the problem at hand. In order to demonstrate the importance of learning from neighbors and learning from the problem, we implemented three variants of the HDABC algorithm and compared their performances with HDABC. The first variant (HDABC-1) does not use any learning strategy; it randomly selects an edge from all edges leading from city i. The second variant (HDABC-2), which only learns from the problem, biasedly selects an edge from the nearest city list of city i; nearer cities have a higher probability of being selected. The third variant (HDABC-3), which only learns from other bees, uses the corresponding edge of a randomly selected employed bee. In case the selected edge is already in the current solution, an edge is randomly selected from all edges leading from city i. Tab. 1 shows the simulation results. It clearly shows that the performance of the HDABC algorithm without any learning is far worse than that of the HDABC algorithm with learning. Learning from other bees is superior to learning from the problem. Learning both from other bees and from
selection strategies, i.e., roulette wheel selection, rank selection, tournament selection and disruptive selection, in the HDABC algorithm, with the aim of finding the basic principles of how to select a suitable selection strategy and how to select parameters for a selection strategy. Because different selection strategies have different selection pressures and the suitable selection pressure depends on the computation resources used, a rational hypothesis is that the relative merits among those selection strategies depend on the computation resources used. To verify this hypothesis, we ran two experiments with different computation resources. In the experiment with high computation resources, a bee updates its solution N/10 times in algorithm 4, where N is the city number of the TSP instance. In the experiment with low computation resources, a bee updates its solution N/100 times in algorithm 4. Fig. 2 compares the performances of the four selection strategies. Fig. 2 clearly shows that the relative merits among different selection strategies depend on the computation resources used. For example, when low computation resources are used, disruptive selection has the best performance and roulette wheel selection has the worst performance. But when high computation resources are used, roulette wheel selection has the best performance and disruptive selection has the worst performance.
To further analyze the relation between performance and selection pressure, we compare the performances of the HDABC algorithm using linear ranking with different selection pressures. As above, two different computation resources are used. The selection pressure parameter sp in Eq. 8 is set from 1 to 2 with a step of 0.1. A bigger sp means higher selection pressure. Fig. 3 compares the performances of different selection pressures for linear ranking. Fig. 3 clearly shows that the suitable selection pressure depends on the computation resources used. For example, when low computation resources are used, high selection pressure has better performance, as shown in Fig. 3 (a). But when high computation resources are used, low selection pressure has better performance, as shown in Fig. 3 (b).
For the basic ABC algorithm, which uses the greedy acceptance criterion, the scout contributes to the diversification of the bee colony and improves the global search ability for continuous optimization problems. For the TSP, although the scout does enhance the diversification of the bee colony, this diversification cannot enhance its global search ability effectively. We compare the performances of the HDABC
algorithm with the greedy acceptance criterion and the HDABC algorithm with the threshold acceptance criterion in Tab. 2. Simulation results clearly show that the threshold acceptance criterion is far better than the greedy acceptance criterion. A possible reason is that the newly produced solution of the scout is far worse than those of other bees, so it contributes little to guiding other bees and also contributes little to searching for better solutions.
results. It shows that a low limit is apparently not suitable, because the frequent usage of the scout will lead bees into stagnation. When limit is not less than 100, simulation results show there is no significant difference among different limit values in most cases. There are two reasons for the uselessness of the scout. One is the characteristic of the solution updating equation adopted: when the target solution and the guiding solution have the same edge, it selects a different edge from the nearest neighbor list. This strategy can provide some diversification for the HDABC algorithm. The other reason is that, although creating a new solution can enhance the diversification of the ABC algorithm, this enhancement contributes little to improving the global search ability of the HDABC algorithm.
Figure 4: Comparison of different scout trigger parameter limit for HDABC
5. Competitiveness of HDABC algorithm
error of the average solution (PE) are used to compare the performances of different algorithms. The Wilcoxon signed ranks test is used to compare the PE of the HDABC algorithm and other algorithms.
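The PE measure can be sketched as follows; a minimal sketch, assuming the usual definition of percentage error relative to the best known tour length:

```python
def percentage_error(avg_length, best_known):
    """Percentage error of the average solution:
    PE = 100 * (average - best known) / best known."""
    return 100.0 * (avg_length - best_known) / best_known
```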
3 Kroa150 26524 26731 2.37 26524 0.14
4 Ts225 126643 127004 0.64 126643 0
5 Rd400 15281 15798 5.16 15354 1.07
6 D657 48912 50653 4.36 49250 1.06
7 Pr1002 259045 269702 5.42 262554 1.67
8 Pr2392 378032 395710 6.13 383792 2.07
In the 2-opt-ABC algorithm, the control parameters were set to 40, 2000, and 100 for population size, maximum generation and limit, respectively. Runs were repeated 50 times for each problem instance. The best solution and PE are given in Tab. 3. Tab. 3 clearly shows that the HDABC algorithm outperforms the 2-opt-ABC algorithm on all 8 instances. The Wilcoxon signed ranks test is used to compare the HDABC algorithm and the 2-opt-ABC algorithm. The computed R+, R−, and p-value are 36, 0, and 0.012 respectively. This means that the HDABC algorithm is significantly better than the 2-opt-ABC algorithm.
P = ⌈N/2⌉ ∗ 2    (11)
500. The population size P was equal to N. If N was an odd number, P was increased by 1. Runs were repeated 20 times for each problem instance. The best float solution and PE are given in Tab. 4. The Wilcoxon signed ranks test is used to compare the HDABC algorithm with the DABC and ACO-ABC algorithms. For the HDABC algorithm and DABC algorithm, the computed R+,
3 Berlin52 7544.37 7544.37 0 7544.37 0 7544.37 0
4 St70 677.11 677.11 0.55 687.24 3.47 677.11 0.05
5 Eil76 545.39 - - 551.07 2.31 544.37 -0.11
6 Pr76 108159.4 108159.4 0.43 113798.6 6.39 108159.4 0
7 Kroa100 21285.44 21285.44 0.98 22122.75 5.40 21285.44 0.01
8 Eil101 642.31 653.30 2.90 672.71 6.39 640.21 0.15
9 Ch150 6532.58 - - 6641.69 2.21 6530.90 0.22
10 Tsp225 3859.00 4122.50 8.07 4090.54 7.74 3861.92 0.93
11 A280 2586.77 2812.68 11.27 - - 2586.77 0.24
R−, and p-value are 45, 0, and 0.018 respectively. It means that the HDABC algorithm is significantly better than the DABC algorithm. For the HDABC and ACO-ABC algorithms, the computed R+, R−, and p-value are 55, 0, and
In the table, the optimal tour lengths of the instances and the results of the IDBA algorithm are taken from [24]. Among the 22 instances, the HDABC algorithm can find the optimal solution on 19 instances, and can always find the optimal solution on 14 instances. The average PE of the HDABC, HDABC-G, and IDBA algorithms are 0.07, 1.16, and 1.83 respectively. The Wilcoxon signed ranks test is used to compare the HDABC and IDBA algorithms. The computed R+, R−, and p-value are 253, 0, and 8.84 × 10−5 respectively. It means that the HDABC algorithm is significantly better than the IDBA algorithm. Wilcoxon signed ranks tests also show that the HDABC algorithm significantly outperforms the HDABC-G algorithm, and the HDABC-G algorithm significantly outperforms the IDBA algorithm.
To compare the convergence behaviour of the HDABC, HDABC-G, and IDBA algorithms, the average number of objective function evaluations (Eval, in thousands) needed to reach the final solution, the average running time (in seconds), and the average number of iterations (Iter) needed to reach the final solution for the HDABC-G and HDABC algorithms are given in Tab. 6. Tab. 6 shows that the convergence behaviour of the HDABC algorithm is quite different from that of the IDBA and HDABC-G algorithms. The IDBA and HDABC-G algorithms, which both use the greedy acceptance criterion, generally converge more quickly than the HDABC algorithm, but they may be more easily trapped in local minima. The HDABC algorithm, which uses the threshold acceptance criterion, can control its
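The two acceptance rules contrasted here can be sketched in a few lines. This is an illustrative sketch only: the threshold value and its handling below are placeholders, not the paper's exact parameter setting.

```python
def greedy_accept(new_cost, cur_cost):
    # Greedy criterion (IDBA, HDABC-G): accept only strict improvements
    return new_cost < cur_cost

def threshold_accept(new_cost, cur_cost, threshold):
    # Threshold accepting: also accept a worse solution whose
    # deterioration stays below the current (shrinking) threshold
    return new_cost - cur_cost < threshold

cur, new = 100.0, 104.0                   # hypothetical tour lengths
print(greedy_accept(new, cur))            # → False
print(threshold_accept(new, cur, 5.0))    # → True
print(threshold_accept(new, cur, 2.0))    # → False
```

Because the threshold shrinks over the run, the threshold rule gradually degenerates into the greedy rule, which is why HDABC trades some early convergence speed for a better chance of escaping local minima.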
for each instance such that the CPU time of those algorithms is less than that of the ASA-GS algorithm. As with the ASA-GS algorithm, the HDABC and MSA-IBS algorithms were executed for 5 trials on each instance, and the results are listed in Tab. 7. As can be seen in Tab. 7, the average PE of the HDABC algorithm over all instances is 0.69, which is better than the 1.87 of the ASA-GS algorithm, and is
Table 5: Performance comparison among IDBA, HDABC-G, and HDABC algorithms

                           IDBA           HDABC-G        HDABC
No.  Instance  Optimal   Best    PE     Best    PE     Best    PE
1    Oliver30  420       420     0      420     0      420     0
2    Eilon50   425       425     0.56   425     0.73   425     0
3    Eil51     426       426     0.49   426     0.54   426     0
4    Berlin52  7542      7542    0      7542    0.21   7542    0
5    St70      675       675     0.6    675     0.39   675     0
6    Eilon75   535       535     2.27   537     1.65   535     0
7    Eil76     538       539     1.84   544     1.95   538     0
8    KroA100   21282     21282   0.76   21282   0.18   21282   0
9    KroB100   22140     22140   1.63   22199   0.98   22141   0.13
10   KroC100   20749     20749   1.43   20749   0.37   20749   0
Table 6: Convergence comparison among IDBA, HDABC-G, and HDABC algorithms

         IDBA            HDABC-G                  HDABC
No.   Eval    Time    Eval    Time   Iter      Eval     Time  Iter
1     2.17    0.4     14.13   0.13   15.7      211.68   0.15  235.2
2     22.8    1.5     82.5    0.2    55        593.63   0.24  395.75
3     15.37   1.7     71.68   0.21   46.85     610.09   0.24  398.75
4     20.07   2.1     45.47   0.21   29.15     474.79   0.25  304.35
5     72.67   3.9     173.25  0.3    82.5      1135.16  0.34  540.55
6     116.56  4.5     177.64  0.33   78.95     948.38   0.4   421.5
7     91.53   5.1     217.06  0.32   95.2      926.59   0.45  406.4
8     739.86  10.6    350.55  0.47   116.85    1634.4   0.51  544.8
9     461.05  11.1    1101.3  0.48   367.1     2004     0.51  668
10    872.51  12.0    417.9   0.47   139.3     1412.25  0.52  470.75
better than the 0.86 of the MSA-IBS algorithm. The Wilcoxon signed ranks test is used to compare the HDABC, MSA-IBS, and ASA-GS algorithms. For the HDABC and ASA-GS algorithms, the computed R+, R−, and p-value are 817, 3, and 1.46 × 10−7 respectively. It means that the HDABC algorithm is significantly better than the ASA-GS algorithm. For the HDABC and MSA-IBS algorithms, the computed R+, R−, and p-value are 686, 134, and 0.005 respectively. It means that the HDABC algorithm is also significantly better than the MSA-IBS algorithm.
6. Conclusions
The ABC algorithm is a smart swarm intelligence algorithm which has splendid collective behaviors and was first proposed for continuous optimization problems. Even though the ABC algorithm is especially simple and easy to implement, it is non-trivial to apply it to new applications, especially combinatorial optimization problems. Suitable strategies should be used to guarantee that the designed solution updating equation keeps the excellent features of the ABC algorithm, and that the integrated effect of the selection strategy, acceptance strategy, and scout trigger parameter limit obtains a good balance between intensification and diversification. Guided by those principles, this paper presents a hybrid discrete ABC algorithm with a threshold acceptance criterion for the TSP. Experimental results show that the new solution updating equation of the HDABC algorithm, which can learn both from other bees and from the problem at hand, can search the solution space of the TSP intelligently and efficiently. Due
Table 7: PE and CPU time (s) comparison among ASA-GS, MSA-IBS, and HDABC algorithms

                            ASA-GS         MSA-IBS        HDABC
No.  Instance  Optimal    PE     Time    PE     Time    PE     Time
3    Krob150   26130      0.18   10.9    -0.01  2.95    -0.01  1.82
4    Pr152     73682      0.01   10.85   0      3.04    0      1.89
5    U159      42080      0.75   11.49   -0.01  3.12    -0.01  2.01
6    Rat195    2323       1.07   14.37   0.6    3.4     0.61   2.18
7    D198      15780      0.41   14.6    0.24   3.58    0.27   2.28
8    Kroa200   29368      0.23   14.26   0      3.56    0.05   2.4
9    Krob200   29437      0.25   14.24   0.03   3.55    0.02   2.31
10   Ts225     126643     0      16.05   0      4.1     0      2.78
11   Pr226     80369      0.39   16.7    0      4.33    0      2.84
12   Gil262    2378       0.86   19.43   0.42   4.86    0.38   3.42
13   Pr264     49135      0      19.09   0.18   4.31    0      2.85
14   Pr299     48191      0.28   21.94   0.04   4.91    0.11   3.7
15   Lin318    42029      0.84   23.35   0.18   4.36    0.26   3.52
16   Rd400     15281      0.97   30.4    0.2    6.04    0.26   4.98
17   Fl417     11861      1.54   32.02   1.15   7.08    1.01   5.61
18   Pr439     107217     2.8    34.92   0.36   6.98    0.22   5.68
19   Pcb442    50778      0.96   35.75   0.08   6.93    0.15   5.93
selection schemes with high selection pressure. Simulation results confirm the competitiveness of the HDABC algorithm. Furthermore, the design principles and the analysis procedure of the proposed HDABC algorithm can be used to guide the design and implementation of other swarm intelligence algorithms for discrete optimization problems.
Acknowledgement
This work was supported by the Natural Science Foundation of Fujian Province of P. R. China under Grants No. 2014J01219, No. 2015J01233, and No. 2016J01280, the Major Projects of Regional Development of Fujian Province of P. R. China under Grant No. 2015N3011, and the Special Fund for Scientific and Technological Innovation of Fujian Agriculture and Forestry University under Grants No. CXZX2016026 and No. CXZX2016031.
References
[2] B. Akay, E. Aydogan, L. Karacan, 2-opt based artificial bee colony algo-
rithm for solving traveling salesman problem, in: 2nd World Conference on
[4] L. Bao, J.-c. Zeng, Comparison and analysis of the selection mechanism in
the artificial bee colony algorithm, in: Hybrid Intelligent Systems, 2009.
HIS’09. Ninth International Conference on, vol. 1, IEEE, 2009.
tion algorithm appearing superior to simulated annealing, Journal of Computational Physics 90 (1) (1990) 161–175.
[7] J. B. Escario, J. F. Jimenez, J. M. Giron-Sierra, Ant colony extended:
experiments on the travelling salesman problem, Expert Systems with Ap-
plications 42 (1) (2015) 390–410.
[8] K. Z. Gao, P. N. Suganthan, Q. K. Pan, T. J. Chua, C. S. Chong, T. X. Cai,
An improved artificial bee colony algorithm for flexible job-shop scheduling
problem with fuzzy processing time, Expert Systems with Applications 65
(2016) 52–67.
[9] Artificial bee colony algorithm for scheduling and rescheduling fuzzy flexible
job shop problem with new job insertion, Knowledge-Based Systems 109
(2016) 1–16.
[10] X. Geng, Z. Chen, W. Yang, D. Shi, K. Zhao, Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search,
[13] H. Ismkhan, Effective heuristics for ant colony optimization to handle large-
scale problems, Swarm and Evolutionary Computation 32 (2017) 140–149.
[14] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Tech. rep. TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005).
[15] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artificial Intelligence Review 42 (1) (2014) 21–57.
[16] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routing using artificial bee colony algorithm, Wireless Networks 18 (7) (2012) 847–860.
[17] D. Karaboga, C. Ozturk, N. Karaboga, B. Gorkemli, Artificial bee colony programming for symbolic regression, Information Sciences 209 (2012) 1–15.
local search for traveling salesman problem, Cybernetics and Systems 45 (8)
(2014) 635–649.
565 [21] Z. Li, Z. Zhou, X. Sun, D. Guo, Comparative study of artificial bee colony
algorithms with heuristic swap operators for traveling salesman problem,
[22] M. Mahi, Ö. K. Baykan, H. Kodaz, A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem, Applied Soft Computing 30 (2015) 484–490.
[24] An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems, Engineering Applications of Artificial Intelligence 48 (2016) 59–71.
[25] A. Ouaarab, B. Ahiod, X.-S. Yang, Discrete cuckoo search algorithm for the travelling salesman problem, Neural Computing and Applications 24 (7-8) (2014) 1659–1669.
[26] C. Ozturk, E. Hancer, D. Karaboga, Dynamic clustering with improved
binary artificial bee colony algorithm, Applied Soft Computing 28 (2015)
69–80.
algorithm based on genetic operators, Information Sciences 297 (2015) 154–
170.
bee colony for traveling salesman problem, in: 4th International Conference
on Electronics Computer Technology (ICECT), April 2012, 2013.
[30] Y. Saji, M. E. Riffi, A novel discrete bat algorithm for solving the travelling salesman problem, Neural Computing and Applications 27 (7) (2016) 1853–1866.
[32] P. Shi, S. Jia, A hybrid artificial bee colony algorithm combined with simulated annealing, in: Information Science and Cloud Computing Companion (ISCC-C), 2013 International Conference on, IEEE, 2013.
[33] M. Shokouhifar, A. Jalali, H. Torfehnejad, Optimal routing in traveling salesman problem using artificial bee colony and simulated annealing, in: 1st National Road ITS Congress, 2015.
(2015) 336–353.
and its application to the traveling salesman problem, Applied Soft Com-
puting 43 (2016) 415–423.
[38] L. Wang, G. Zhou, Y. Xu, S. Wang, M. Liu, An effective artificial bee colony
algorithm for the flexible job-shop scheduling problem, The International
Journal of Advanced Manufacturing Technology 60 (1-4) (2012) 303–315.
[39] X. Xu, X. Lei, Multiple sequence alignment based on ABC_SA, in: International Conference on Artificial Intelligence and Computational Intelligence, Springer, 2010.
[40] Z. Xu, Y. Wang, S. Li, Y. Liu, Y. Todo, S. Gao, Immune algorithm combined with estimation of distribution for traveling salesman problem, IEEJ Transactions on Electrical and Electronic Engineering 11 (S1) (2016) S142–S154.
[41] W. Yong, Hybrid max–min ant system with four vertices and three lines
inequality for traveling salesman problem, Soft Computing 19 (3) (2015)
585–596.
[42] Y. Zhou, Q. Luo, H. Chen, A. He, J. Wu, A discrete invasive weed optimization algorithm for solving traveling salesman problem, Neurocomputing 151 (2015) 1227–1236.
[43] Y. Zhou, X. Ouyang, J. Xie, A discrete cuckoo search algorithm for travel-