
Accepted Manuscript

Hybrid Discrete Artificial Bee Colony Algorithm with Threshold Acceptance Criterion for Traveling Salesman Problem

Yiwen Zhong, Juan Lin, Lijin Wang, Hui Zhang

PII: S0020-0255(17)30274-8
DOI: 10.1016/j.ins.2017.08.067
Reference: INS 13072

To appear in: Information Sciences

Received date: 21 January 2017


Revised date: 15 August 2017
Accepted date: 20 August 2017

Please cite this article as: Yiwen Zhong, Juan Lin, Lijin Wang, Hui Zhang, Hybrid Discrete Artificial Bee
Colony Algorithm with Threshold Acceptance Criterion for Traveling Salesman Problem, Information
Sciences (2017), doi: 10.1016/j.ins.2017.08.067

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.

Hybrid Discrete Artificial Bee Colony Algorithm with Threshold Acceptance Criterion for Traveling Salesman Problem

Yiwen Zhong a,∗, Juan Lin a,b, Lijin Wang a, Hui Zhang b

a College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, 350002 China
b Department of Computer Engineering and Computer Science, University of Louisville, KY, 40217 USA

Abstract

The artificial bee colony (ABC) algorithm, which has explicit strategies to balance intensification and diversification, is a smart swarm intelligence algorithm and was first proposed for continuous optimization problems. In this paper, a hybrid discrete ABC algorithm, which uses the acceptance criterion of the threshold accepting method, is proposed for the Traveling Salesman Problem (TSP). A new solution updating equation, which can learn both from other bees and from features of the problem at hand, is designed for the TSP. To enhance its ability to escape from premature convergence, employed bees and onlooker bees use the threshold acceptance criterion to decide whether or not to accept newly produced solutions. Systematic experiments were performed to show the advantage of the new solution updating equation, to verify the necessity of using a non-greedy acceptance strategy for keeping sufficient diversity, to compare different selection schemes for onlooker bees, and to analyze the contribution of the scout bee. Comparison experiments performed on a wide range of benchmark TSP instances show that the proposed algorithm is better than other ABC-based algorithms and is better than or competitive with many other state-of-the-art algorithms.


Keywords: Artificial bee colony algorithm, Traveling salesman problem, Threshold acceptance criterion, Collective behavior, Selection schemes

∗ Corresponding author
Email address: yiwzhong@fafu.edu.cn (Yiwen Zhong)

Preprint submitted to Information Sciences August 21, 2017

1. Introduction

Swarm intelligence (SI) can be defined as the collective behavior of decentralized and self-organized swarms. Two fundamental concepts considered as necessary properties of SI are self-organization and division of labor [1]. Honey bee swarms, which clearly have strong self-organization and division-of-labor features, have inspired many SI algorithms. Among those algorithms, artificial bee colony (ABC) is the one that has been most widely studied and applied to solve real-world problems [15]. The ABC algorithm, which simulates the foraging behavior of honey bees, was invented by Karaboga [14] and was first described for solving numerical optimization problems [5]. Although numerical problems are still active research fields for ABC, more and more variants have been introduced for discrete and combinatorial problems, such as 0-1 knapsack [27], multiple sequence alignment [39], wireless sensor network routing [16], symbolic regression [17], job-shop scheduling [38], flexible job-shop scheduling [8, 9], dynamic clustering [26], and feature selection [12].

The traveling salesman problem (TSP) is a well-known NP-complete combinatorial optimization problem. The purpose of the TSP is to find the shortest tour visiting each city once and only once. As an NP-complete problem, the TSP's computational complexity increases exponentially with its number of cities. For many real-world TSP applications such as data association, vehicle routing, data transmission in computer networks, scheduling, and transportation and logistics, finding suboptimal solutions at a reasonable cost may be more advantageous, and therefore much interest has been focused on using efficient meta-heuristics to solve the TSP. In recent years, many meta-heuristics have been proposed to solve the TSP, such as ant algorithms [13, 41, 7], genetic algorithms (GA) [37, 28], particle swarm optimization (PSO) [3], cuckoo search (CS) [25, 43], bee-inspired algorithms [19, 33], bat algorithms (BA) [24, 30], the firefly algorithm [31], immune algorithms [40], invasive weed optimization [42], African buffalo optimization [23], simulated annealing (SA) algorithms [10, 36], and some hybrid algorithms [11, 22].
The ABC algorithm is an outstanding SI algorithm for continuous optimization problems. Several features contribute to its success. The solution updating equation, which cooperates with other bees and updates one variable at a time, can search the solution space intelligently in a fine-grained manner. The greedy acceptance strategy and the onlooker bees enhance its intensification ability, and the scout bee enlarges its diversification. Working together, these strategies produce a good balance between intensification and diversification for continuous optimization problems. Although the basic ABC algorithm is simple and easy to implement, applying the ABC algorithm to combinatorial problems is not a simple task. We must carefully redesign the solution updating equation for the problem at hand, so that the solution updating equation retains its good features. Next, we must recheck the integrated effect of the greedy acceptance strategy, onlookers, and scouts, so that a good balance between intensification and diversification is obtained in the search process.
Aiming to study the basic principles of how to extend the ABC algorithm to discrete optimization problems, this paper presents a hybrid discrete ABC (HDABC) algorithm for the TSP. In the HDABC algorithm, a novel solution updating equation, which can learn both from other bees and from features of the TSP, is designed to produce new solutions for employed bees and onlooker bees. Because the scout bee alone is not effective enough for the HDABC algorithm to keep sufficient diversification for the TSP, the greedy acceptance strategy is replaced with the threshold acceptance criterion of the Threshold Accepting (TA) method to further balance intensification and diversification. Systematic experiments were performed to analyze the behaviors of the HDABC algorithm. The performance of the HDABC algorithm was compared with other state-of-the-art meta-heuristics on a wide range of benchmark TSP instances.

The remainder of this paper is organized as follows: Section 2 provides a short description of the basic ABC algorithm, the TSP, and meta-heuristics for the TSP. Section 3 presents our proposed HDABC algorithm. Section 4 analyzes the behaviors of the HDABC algorithm. Section 5 compares the performance of the HDABC algorithm with some other state-of-the-art algorithms on a large number of TSP instances. Finally, in Section 6 we summarize our study.

2. Related Works

This section introduces the basic ABC algorithm, the TSP, and meta-heuristics for the TSP. In Section 2.1, the principles and pseudocode of the basic ABC algorithm are described in detail. Section 2.2 introduces the TSP and its goal. Section 2.3 gives a brief survey of state-of-the-art meta-heuristics for the TSP and analyzes the main advantages and disadvantages of current ABC algorithms for the TSP.

2.1. ABC Algorithm



The ABC algorithm was first proposed by Karaboga in 2005 [14] for solving numerical optimization problems. Since the ABC algorithm is simple to understand, easy to implement, and has few control parameters, it has been widely used in many fields. In the ABC algorithm, the artificial honey bees consist of employed bees, onlooker bees, and scout bees, among which the numbers of employed bees and onlooker bees are equal. A bee waiting on the dance area to decide which food source to choose is called an onlooker bee; a bee going to a food source it has visited previously is named an employed bee. A bee carrying out random search is called a scout. The process of bees looking for food sources is the process of finding the optimum solution: each solution of the optimization problem is considered as a food source position in the search space, and the fitness of a solution represents the profitability of the food source. The number of solutions N is equal to the number of employed bees or onlooker bees. First of all, the ABC algorithm randomly generates N initial solutions, each of which is a vector x of dimension D. An employed bee starts the neighborhood search first: it produces a candidate position y by the solution updating Eq. 1. The position of the new food source replaces the previous one if it is better than the previous position; otherwise the previous position is kept.


y_ik = x_ik                              if k ≠ d
y_ik = x_ik + r ∗ (x_jk − x_ik)          otherwise        (1)

where k = 1, 2, ..., D; j ≠ i and j ∈ {1, 2, ..., N} is a randomly chosen index; and d is a randomly selected dimension from {1, 2, ..., D}. Moreover, r is a random number in [-1, 1]. The solution updating equation has two main features: one is that it updates only one dimension, and the other is that it learns from another randomly selected bee x_j. Updating one dimension guarantees that bees can search the solution space in a fine-grained way. Learning from another bee, which is achieved by using the difference between itself and that bee to update the solution, not only produces an intelligent and adaptive neighborhood structure, but is also the main origin of the collective behavior of the swarm.
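To make the updating rule concrete, the following Java sketch (with hypothetical helper names, not the authors' code) shows how an employed bee could produce a candidate solution from Eq. 1 for a continuous problem.

    import java.util.Random;

    // Minimal sketch of the basic ABC updating rule (Eq. 1), assuming solutions
    // are stored as double[] vectors in a population array `pop`.
    final class AbcUpdate {
        static final Random RNG = new Random();

        // Produce candidate y from solution i by perturbing one randomly chosen
        // dimension d toward/away from a randomly chosen partner j (j != i).
        static double[] candidate(double[][] pop, int i) {
            int n = pop.length, dims = pop[i].length;
            int j = RNG.nextInt(n);
            while (j == i) j = RNG.nextInt(n);          // partner bee, j != i
            int d = RNG.nextInt(dims);                  // single dimension to change
            double r = 2 * RNG.nextDouble() - 1;        // r uniform in [-1, 1]
            double[] y = pop[i].clone();                // all other dimensions kept
            y[d] = pop[i][d] + r * (pop[j][d] - pop[i][d]);
            return y;
        }
    }

Note how only one coordinate changes per call, which is exactly the fine-grained search property discussed above.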
After all the employed bees have completed the search process, they share the nectar information of the food sources with the onlooker bees on the dance area. An onlooker bee evaluates the nectar information taken from all employed bees, and then it chooses a food source according to a selection probability. The higher a solution's fitness is, the higher its selection probability is. The selection probability is described in Eq. 2, where fit(i) is the fitness of the i-th solution and N is the number of solutions. After selecting a food source, onlookers carry out the exploitation process using Eq. 1, like employed bees.

p_i = fit(i) / Σ_{j=1..N} fit(j)        (2)

In the ABC algorithm, if a solution cannot be improved further within a predetermined number of cycles, i.e., the limit, then that solution is assumed to be abandoned, and the corresponding employed bee becomes a scout. Suppose solution x_i is to be abandoned; then a new solution produced randomly by Eq. 3 replaces x_i:

x_ij = x_j,min + r ∗ (x_j,max − x_j,min)        (3)


where x_j,min is the lower bound of dimension j and x_j,max is the upper bound of dimension j.
The ABC algorithm's pseudocode is listed in Algorithm 1 and Algorithm 2. Algorithm 1 is the basic framework of the ABC algorithm, and Algorithm 2 is used by employed bees and onlooker bees to update solutions. The parameter t of Algorithm 2 represents how many times a bee uses the solution updating equation to produce new solutions in each generation. For the basic ABC algorithm, the parameter t is set to 1.

2.2. The TSP Problem

The TSP is one of the most famous hard combinatorial optimization problems. It belongs to the class of NP-complete optimization problems. This means that no known polynomial-time algorithm can guarantee finding its global optimal solution. Consider a salesman who has to visit n cities. The objective of the TSP is to find a shortest tour through all the cities such that no city is visited twice and the salesman returns to the starting city at the end of the tour. It can be defined as follows. For an n-city problem, we can use a distance matrix D = (d_i,j)_{n×n} to store the distances between all pairs of cities, where each element d_i,j of matrix D represents the distance between city i and city j. We can use a linked list to represent a solution, and use a vector x to implement the linked list. Each element x_i in x represents an edge from city i to city x_i. The goal is to find a solution x that minimizes


f(x) = Σ_{i=1..n} d_{i,x_i}        (4)

The TSP may be symmetric or asymmetric. If symmetric, the distance between two cities is identical in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions.
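As a small illustration of this representation (the class and method names are ours, not from the paper), the sketch below computes the tour length of Eq. 4 when x[i] stores the successor city of city i.

    // Minimal sketch: tour length under the successor-vector representation,
    // where x[i] is the city visited immediately after city i (Eq. 4).
    final class TourLength {
        static double tourLength(double[][] dist, int[] x) {
            double len = 0.0;
            for (int i = 0; i < x.length; i++) {
                len += dist[i][x[i]];   // add edge (i, x[i])
            }
            return len;
        }
    }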

2.3. Meta-heuristics for the TSP problem

In recent years, many meta-heuristics have been proposed for the TSP.


Algorithm 1 The basic framework of ABC Algorithm
//Initialization Phase
1: Setup the parameters, such as limit and parameter t in Algorithm 2.
2: Produce the initial solutions using Eq. 3.
3: Calculate fitness of the solutions.
4: while termination condition is not met do
   //Employed Bee Phase
5:   for each Employed Bee do
6:     Search new solution(t)
7:   end for
   //Onlooker Bee Phase
8:   Calculate selection probabilities of the employed bees using Eq. 2.
9:   for each Onlooker Bee do
10:    Use the calculated selection probability to select an employed bee.
11:    Search new solution(t)
12:  end for
13:  Save the best solution obtained so far.
   //Scout Bee Phase
14:  if a scout bee occurs then
15:    Produce a new solution by using Eq. 3.
16:    Calculate fitness of the produced solution.
17:    Reset its limit to 0.
18:  end if
19: end while

Algorithm 2 Search new solution(t)
1: repeat
2:   Produce candidate solution y using Eq. 1.
3:   Calculate fitness of the candidate solution y.
4:   if the fitness of candidate solution is better than current solution x then
5:     x = y.
6:   end if
7:   if solution has been improved then
8:     Reset limit to 0.
9:   else
10:    Increase its limit by 1.
11:  end if
12: until the statements repeat t times
Ismkhan [13] proposed a new ant colony optimization (ACO) algorithm with three effective strategies: a pheromone representation with linear space complexity, a new next-city selection rule, and a pheromone-augmented 2-opt local search. Yong [41] proposed a hybrid Max-Min ant system which is integrated with a four-vertices-and-three-lines inequality; the inequality is used as a local search strategy for each ant. Escario et al. [7] proposed an ant colony extended algorithm which includes a self-organization property based on task division and an emergent task distribution driven by the feedback provided by the results of the ants' searches. Wang et al. [37] proposed a multi-offspring GA whose crossover and mutation operators generate more offspring than the classical methods. Pau et al. [28] studied the performance of permutation-coded GA with different population seeding techniques. Akhand et al. [3] presented a velocity tentative PSO where velocity is represented as a swap sequence (SS) consisting of several swap operators (SOs). Ouaarab et al. [25] proposed an improved discrete CS algorithm where the 2-opt move and the double-bridge move are used to generate new solutions; furthermore, a fraction of smart cuckoos are selected to be improved by local search. Zhou et al. [43] proposed a novel discrete CS algorithm which uses a learning operator, an 'A' operator, and 3-opt to accelerate the convergence rate. Osaba et al. [24] proposed an improved discrete BA (IDBA) which uses the Hamming distance to measure the distance between bats, and 2-opt and 3-opt operators to improve solutions. Saji et al. [30] proposed a novel discrete BA where a two-exchange crossover operator is used to update solutions and a 2-opt operator is used to improve solutions. Saraei et al. [31] proposed a firefly algorithm which uses greedy swap to improve solutions. Xu et al. [40] proposed an immune algorithm combined with an estimation of distribution algorithm (IA-EDA); in IA-EDA, a heuristic refinement local search operator is proposed to repair infeasible solutions. Zhou et al. [42] proposed a discrete invasive weed optimization which uses a 3-opt local search operator and an improved complete 2-opt to generate new solutions. Odili et al. [23] proposed the African buffalo algorithm for the TSP. Geng et al. [10] proposed an adaptive SA algorithm with greedy search (ASA-GS), where a greedy search technique is used to speed up the convergence rate. Wang et al. [36] proposed a multi-agent SA with instance-based sampling (MSA-IBS), where instance-based sampling is used to improve the efficiency of sampling. Mahi et al. [22] presented a hybrid method which uses the PSO algorithm, the ACO algorithm, and the 3-opt heuristic: PSO is used to detect optimum values of the parameters used for city selection in the ACO algorithm, and the 3-opt algorithm is used to further improve the best solution produced by the ACO algorithm.

Several versions of the ABC algorithm have been proposed for solving the TSP. Kocer et al. [19] proposed an improved ABC algorithm which uses a loyalty function and a threshold to decide whether a bee is an employed bee or an onlooker bee, and each employed bee is enriched by a 2-opt local search. Shokouhifar et al. [33] presented a hybrid ABC algorithm where swap, inverse, relocation, and Or-opt operators are used to produce new solutions and the result of the ABC algorithm is further enriched by an SA algorithm. Akay et al. [2] proposed a 2-opt based ABC algorithm where a neighbor-based 2-opt move and the 2-opt algorithm are used to produce new solutions. Kiran et al. [18] proposed a discrete ABC (DABC) where nine neighborhood operators are used as solution updating equations of the basic ABC, and 2-opt or 3-opt heuristic approaches are used to enrich the results obtained by DABC. Gündüz et al. [11] proposed a hybrid method (ACO-ABC) where ACO is used to provide a better initial solution for the ABC. Sabet et al. [29] proposed a hybrid mutation-based ABC algorithm where swap, insert, and biased insert operators are used to produce new solutions. Li et al. [21] presented two efficient ABC algorithms with heuristic swap operators, where a nearest-neighbor list is used to constrain the number of available cities. Shi and Jia [32] proposed a hybrid ABC algorithm which uses crossover and mutation operators to produce new solutions, and the Metropolis acceptance criterion is used to select a candidate solution from the two solutions produced by the crossover and mutation operators.
After analyzing the solution updating schemes used in variant ABC algorithms for the TSP [19, 33, 18, 29, 21], we found that in most of them bees update solutions independently. This means those variants lose one of the important origins of collective behavior for SI algorithms. Akay et al. [2] do use a neighbor-based 2-opt move in which other bees guide the solution updating; however, the 2-opt algorithm, which is used to produce a new solution whenever the solution produced by the neighbor-based 2-opt move is worse than the current solution, is both time-consuming and inefficient in this context. Furthermore, the integration of the greedy acceptance strategy and the 2-opt algorithm, which also uses a greedy strategy, may further deteriorate ABC's diversification. The crossover operator used in [32] can also learn from other bees, but it changes a large part of the solution, so it is not good at searching the solution space finely. Shi and Jia [32] try to use the Metropolis acceptance criterion of the SA algorithm to balance intensification and diversification, but they use it to select between the solution produced by the crossover operator and the one produced by the mutation operator; this dramatically weakens its effect because the selected result still must compete with the current solution under the greedy acceptance strategy. Aiming to tackle these shortcomings, we propose the HDABC algorithm, which not only retains the collaborative learning ability and fine-grained search ability of the classic ABC algorithm, but also obtains a better balance between intensification and diversification by using the threshold acceptance criterion.

3. Hybrid Discrete ABC Algorithm

This section presents the main ideas of the HDABC algorithm and the strategies it uses. Section 3.1 explains the solution updating equation and its concrete implementation. Section 3.2 introduces the four selection operators used by onlooker bees in detail. Section 3.3 describes the threshold acceptance criterion used by the HDABC algorithm. Finally, Section 3.4 gives a detailed description of the pseudocode of the HDABC algorithm.

3.1. Solution Updating Equation


The solution updating equation of the basic ABC algorithm is designed for continuous optimization problems. For combinatorial problems, we must redesign this equation so that it is both consistent with the characteristics of the problem at hand and retains the good features of the original updating equation. For the TSP, we can use a linked list to represent the solution, and use a vector x to implement the linked list. Each element x_i in x represents an edge e_{i,x_i} from city i to city x_i. We define the minus operator of two elements x_i and y_i from solutions x and y as follows:
y_i − x_i = e_{i,y_i}        if y_i ≠ x_i
y_i − x_i = e_{i,k}          else, where k ≠ y_i and k ∈ nb(i)        (5)
where nb(i) is the nearest-neighbor set of city i. Using the new minus operator defined above, we form the solution updating equation as follows:

x = x + (y_i − x_i)        (6)

where y is another solution randomly selected from the swarm, and the plus operator means adding the edge represented by the second operand to the first operand, under the constraint that the validity of the first operand must be retained. The philosophy behind this solution updating equation is that, in the case where solutions x and y have a different next visiting city from city i, solution x will try to learn from solution y by adding the next visiting city of solution y into its own tour. On the other hand, in the case where x and y have the same next visiting city from city i, solution x will try to learn from the problem at hand by selecting a new city from the nearest-city list of city i. Learning from other bees guarantees that the HDABC algorithm has collective behavior, and learning from the problem improves its efficiency and speeds up its convergence. The remaining question is how to add a selected edge into a solution for the TSP.
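The following Java sketch illustrates the edge-selection step of Eq. 5 and Eq. 6 under the successor-vector representation; the class and method names are illustrative, not taken from the paper.

    import java.util.Random;

    // Illustrative sketch of the minus operator (Eq. 5): it selects which edge
    // (i, k) solution x should try to adopt when guided by solution y at city i.
    final class EdgeSelection {
        static final Random RNG = new Random();

        // x[c] and y[c] store the successor city of c; nb[c] is the nearest-neighbor
        // list of city c, assumed to hold at least two distinct cities.
        static int selectNextCity(int[] x, int[] y, int[][] nb, int i) {
            if (y[i] != x[i]) {
                return y[i];                      // different successors: learn from the other bee
            }
            int idx = RNG.nextInt(nb[i].length);  // same successor: learn from the problem
            int k = nb[i][idx];
            if (k == y[i]) {                      // ensure the chosen neighbor differs from y_i
                k = nb[i][(idx + 1) % nb[i].length];
            }
            return k;
        }
    }

The returned city k then plays the role of the second operand in Eq. 6; how the edge (i, k) is actually spliced into the tour is the subject of the next paragraphs.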

Wang et al. [36] systematically studied several adding strategies, such as inverse, insert, swap, and their combination. Their study shows that the greedy hybrid operator has the best performance. This is a kind of operator with multiple neighbors, which selects the best one among those neighbors. Specifically, after an edge e_{i,j} is selected to be added into the current solution, it uses the inverse operator, the insert operator, and the swap operator to produce three neighbor solutions, and the best one is used as the candidate solution. The insert operator in [36] is a point insert operator: only the city j is moved to the position behind city i. HDABC uses a similar greedy hybrid operator where the insert operator is replaced with a block insert operator. In the block insert operator, a block of cities led by j is moved to the position behind i. The point insert operator is the special case of the block insert operator where the block size is 1. The block length is randomly produced for each insert operation. The block insert operator is described in Fig. 1, where the three dotted lines represent the three replaced (replacing) edges.

Figure 1: Block insert operator used by greedy hybrid operator
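The sketch below (our own illustrative code, not the authors') shows one way the block insert move can be realized on the successor-vector representation; it performs only the splice itself and assumes the caller has already verified that city i is neither inside the block led by j nor the block's predecessor. The full greedy hybrid operator would build the inverse, block-insert, and swap neighbors and keep the best of the three.

    import java.util.Random;

    // Illustrative block insert: the block of consecutive cities led by j is
    // moved to directly after city i. x[c] stores the successor of city c.
    final class BlockInsert {
        static final Random RNG = new Random();

        static void blockInsert(int[] x, int i, int j, int blockLen) {
            // find the predecessor p of j (the city whose successor is j)
            int p = 0;
            while (x[p] != j) p++;
            // walk to the last city b of the block led by j
            int b = j;
            for (int s = 1; s < blockLen; s++) b = x[b];
            int afterBlock = x[b];   // city following the block
            int afterI = x[i];       // current successor of i
            // splice: exactly three edges are replaced, matching Fig. 1
            x[p] = afterBlock;       // (p, j) and (b, afterBlock) removed, (p, afterBlock) added
            x[i] = j;                // (i, afterI) removed, (i, j) added
            x[b] = afterI;           // (b, afterI) added
        }
    }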

3.2. Selection Operator Used by Onlooker Bees

In the ABC algorithm, onlooker bees are used to enhance its intensification ability. To accomplish this task, onlookers tend to select better employed bees to improve; this means the better employed bees have a higher probability of being selected. There are several possible selection strategies, such as roulette wheel selection, tournament selection, rank selection, and disruptive selection. Bao and Zeng [4] compared these four selection schemes in the ABC algorithm for continuous problems and concluded that the ABC algorithm with disruptive selection, tournament selection, or rank selection performed better than the basic ABC algorithm with roulette wheel selection. In order to analyze the relationship among performance, selection pressure, and computation resource, this paper systematically compares those four selection schemes under different computation resources in Sec. 4.2.
1) Roulette Wheel Selection
The probability p_i, which is used to select the i-th employed bee, is calculated by Eq. 2, where fit(i) is the fitness of the i-th employed bee and is computed as follows:
follows:
1
f it(i) = (7)
f (i)

where f(i) is the tour length of the solution of the i-th employed bee.
2) Tournament Selection
Tournament selection works as follows: an onlooker bee chooses some number n of employed bees randomly from the population and uses the best one. In this paper, we use binary tournament, where n is equal to 2.
3) Rank Selection

In rank selection, the population is sorted according to the objective values. The fitness assigned to each employed bee depends only on its position in the sorted list and not on the actual objective value. Rank-based fitness assignment overcomes the scaling problems of proportional fitness assignment. Rank selection introduces a uniform scaling across the population and provides a simple and effective way of controlling selection pressure. Rank selection may use linear ranking or nonlinear ranking; in this paper, we only study linear ranking. Let N be the number of bees in the population, i the position of an employed bee in this population (the worst bee has i = 1, the best bee i = N), and sp the selection pressure. In linear ranking, the fitness value for a bee is calculated as:
fit(i) = 2 − sp + 2 ∗ (sp − 1) ∗ (i − 1)/(N − 1)        (8)
where the selection pressure sp is in [1.0, 2.0]. After the fitness fit(i) has been calculated, Eq. 2 is used to calculate the selection probability p_i for each employed bee.
4) Disruptive Selection

Disruptive selection is a type of natural selection that selects against average individuals in a population; it tends to select both extremes, but little in the middle. In disruptive selection, the fitness value for a bee is calculated as:
fit(i) = f(i) − f̄        (9)
where f̄ is the average value of the objective function f. After the fitness fit(i) has been calculated, Eq. 2 is used to calculate the selection probability p_i for each employed bee.
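As an example of how these pieces fit together, the Java sketch below (our own helper names, not the paper's) feeds the linear-ranking fitness of Eq. 8 into the probability of Eq. 2 and samples one employed bee; it assumes at least two bees and a minimization objective.

    import java.util.Arrays;
    import java.util.Random;

    // Illustrative sketch: rank-based fitness (Eq. 8) plus roulette sampling (Eq. 2).
    final class RankSelection {
        static final Random RNG = new Random();

        // tourLen[i] is the tour length of employed bee i; sp in [1.0, 2.0].
        static int select(double[] tourLen, double sp) {
            int n = tourLen.length;
            Integer[] order = new Integer[n];
            for (int i = 0; i < n; i++) order[i] = i;
            // sort worst-first, so position pos corresponds to rank i = pos + 1
            Arrays.sort(order, (a, b) -> Double.compare(tourLen[b], tourLen[a]));
            double[] fit = new double[n];
            for (int pos = 0; pos < n; pos++) {
                int bee = order[pos];
                fit[bee] = 2 - sp + 2 * (sp - 1) * pos / (double) (n - 1);  // Eq. 8
            }
            double sum = 0;
            for (double f : fit) sum += f;
            double r = RNG.nextDouble() * sum, acc = 0;   // Eq. 2 via cumulative sum
            for (int i = 0; i < n; i++) {
                acc += fit[i];
                if (r <= acc) return i;
            }
            return n - 1;
        }
    }

The other schemes differ only in how fit(i) is computed (Eq. 7 for roulette wheel, Eq. 9 for disruptive selection), or bypass Eq. 2 entirely in the case of tournament selection.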

3.3. Acceptance Strategy

In the basic ABC algorithm, the greedy acceptance strategy is used by employed bees and onlooker bees to decide whether to accept a new solution. This strategy, helped by scout bees, can maintain enough diversification for numerical problems. But for the TSP, due to its discreteness, the greedy acceptance strategy easily leads to premature convergence. To tackle this problem, we use the acceptance criterion of the TA method to accept new solutions. The threshold acceptance criterion can be described by the following inequality:

f(y) − f(x) < T        (10)

where T > 0 is the threshold parameter. If inequality 10 is satisfied, solution y is accepted. This means that not only are all improving solutions accepted, but deteriorating solutions whose cost does not exceed the cost of the current solution plus the parameter T are accepted as well. The parameter T, which is used to prevent the TA method from being trapped in a local minimum, is reduced throughout the search [6].
In order to apply the threshold acceptance criterion in the HDABC algorithm, we must specify a parameter-control strategy that sets the initial value of T and adjusts its value throughout the search. To simplify the implementation of the HDABC algorithm, we use the idea of list-based TA [34, 35]. In list-based TA, the initial value of the threshold and the percentage of threshold reduction are determined by the algorithm automatically, without the intervention of the user [20]. Specifically, a list of thresholds is created first, and then, in each generation, the maximum value in the list is used as the current threshold T in inequality 10. The threshold list is updated adaptively according to the topology of the solution space of the problem.
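The Java sketch below gives a simplified picture of list-based threshold acceptance, assuming normalized thresholds (relative cost differences) are kept in a max-heap; the names are ours. Note that it is simplified to update the list after each accepted deteriorating move, whereas Algorithm 4 below accumulates the average normalized difference over one call of the search routine before updating.

    import java.util.PriorityQueue;

    // Minimal sketch of a list-based threshold acceptance helper.
    final class ThresholdList {
        private final PriorityQueue<Double> list =
                new PriorityQueue<>((a, b) -> Double.compare(b, a)); // max value first

        ThresholdList(double[] initialThresholds) {      // assumed non-empty
            for (double t : initialThresholds) list.add(t);
        }

        double currentThreshold() {                      // maximum value in the list
            return list.peek();
        }

        // Accept y if f(y) - f(x) < T * f(x); when a worse solution is accepted,
        // replace the maximum threshold by the normalized cost difference.
        boolean acceptAndUpdate(double fx, double fy) {
            double t = currentThreshold();
            if (fy - fx >= t * fx) return false;         // rejected
            if (fy > fx) {                               // accepted deteriorating move
                list.poll();                             // drop the current maximum
                list.add((fy - fx) / fx);                // insert normalized difference
            }
            return true;
        }
    }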
AN
3.4. Implementation of HDABC Algorithm

The basic framework of the HDABC algorithm is almost the same as that of the ABC algorithm described in Algorithm 1. We update Algorithm 1 in two aspects. The first update is that a threshold list is created for each employed bee. Algorithm 3 is used to create the initial threshold list; as shown in line 4 of Algorithm 3, normalized threshold values are stored in the list. The second update is that the order of statements 8 and 9 is swapped in the HDABC algorithm. This means the selection probability is recalculated for each onlooker bee; because an onlooker bee may update an employed bee, this strategy is more consistent with the current status of the bee colony. Because the HDABC algorithm uses the threshold acceptance criterion to accept candidate solutions, Algorithm 2, which is the solution updating procedure, is replaced with Algorithm 4. In Algorithm 4, the variables total and counter are used to update the threshold list. The variable counter stores the number of times a bad solution is accepted, and the variable total stores the sum of the normalized cost differences. If any bad solution is accepted, the average normalized cost difference replaces the maximum value in the threshold list.


Algorithm 3 Create initial threshold list(x, len)
1: while the length of list is less than len do
2:   Produce candidate solution y using inverse, insert, or swap operator.
3:   Calculate tour length f(y) of the candidate solution y.
4:   Insert |f(y) − f(x)|/f(x) into list.
5:   if y is better than x then
6:     x = y.
7:   end if
8: end while
9: Return the created threshold list.

4. Behavior analysis of HDABC algorithm


In order to observe and analyze the effectiveness of the different components of the HDABC algorithm, four kinds of experiments were carried out on benchmark TSP instances. In Section 4.1, experiments are used to analyze the importance of the learning strategy in the solution updating equation. In Section 4.2, experiments are performed to compare the performance of different selection strategies for onlooker bees under different computation-resource constraints. In Section 4.3, experiments are performed to analyze the necessity of a non-greedy acceptance criterion. Finally, in Section 4.4, experiments are carried out to analyze the contribution of the scout bee. These experiments were carried out on the BCL380, XQL662, XIT1083, and DKF3954 instances from the VLSI data sets, whose best known integer solutions are 1621, 2513, 3558, and 12538, respectively. The number of iterations of Algorithm 1 was 1000 and the population size was 15. The length of the threshold list was 300. The percentage error of the average solution is used to compare the performances of the HDABC algorithm with different strategies. The following experiments were run on an Intel Core i7-5600 laptop with 2.60 GHz and 8 GB of RAM. Java was used as the programming language.


Algorithm 4 Search new solution for HDABC(t)
1: T = the maximum threshold value of threshold list.
2: total = 0, counter = 0.
3: repeat
4:   Produce candidate solution y using Eq. 6.
5:   Calculate tour length f(y) of the candidate solution y.
6:   if f(y) − f(x) < T ∗ f(x) then
7:     x = y.
8:     if f(y) − f(x) > 0 then
9:       total = total + (f(y) − f(x))/f(x).
10:      counter = counter + 1.
11:    end if
12:  end if
13: until the statements repeat t times
14: if counter > 0 then
15:   Delete the maximum value from threshold list.
16:   Insert total/counter into threshold list.
17: end if
18: if solution has been improved then
19:   Reset limit to 0.
20: else
21:   Increase its limit by 1.
22: end if


Table 1: Performance comparison of HDABC with different learning strategies


Instance Optimal HDABC-1 HDABC-2 HDABC-3 HDABC
BCL380 1621 31.17 5.49 1.81 1.69
XQL662 2513 52.86 5.5 1.73 1.59
XIT1083 3558 84.05 6.2 2.51 2.2

DKF3954 12538 179.78 9.45 3.15 2.88

4.1. The Importance of Learning

CR
When we use the solution updating Eq. 6 to produce a candidate solution, the selection of the second term, the edge to be added to the current solution, is guided both by other bees and by knowledge of the problem at hand. In order to demonstrate the importance of learning from neighbors and learning from the problem, we implemented three variants of the HDABC algorithm and compared their performances with HDABC. The first variant (HDABC-1) does not use any learning strategy; it randomly selects an edge from all edges leaving city i. The second variant (HDABC-2), which only learns from the problem, selects an edge in a biased way from the nearest-city list of city i, with nearer cities having a higher probability of being selected. The third variant (HDABC-3), which only learns from other bees, uses the corresponding edge of a randomly selected employed bee; in case the selected edge is already in the current solution, an edge is randomly selected from all edges leaving city i. Tab. 1 shows the simulation results. It clearly shows that the performance of the HDABC algorithm without any learning is far worse than that of the HDABC algorithm with learning. Learning from other bees is superior to learning from the problem, and learning both from other bees and from the problem produces the best performance.

4.2. Compare the Selection Strategies of Onlooker



The main purpose of onlooker bees in the ABC algorithm is to enhance intensification ability. To accomplish this task, a suitable selection strategy should be used by onlooker bees to select employed bees. This is not a trivial task, because different selection strategies have different selection pressures. We compare four selection strategies, i.e., roulette wheel selection, rank selection, tournament selection, and disruptive selection, in the HDABC algorithm, with the aim of finding basic principles for how to choose a suitable selection strategy and how to set its parameters. Because different selection strategies have different selection pressures, and the suitable selection pressure depends on the computation resource used, a rational hypothesis is that the relative merits among those selection strategies depend on the computation resource used. To verify this hypothesis, we use two experiments with different computation resources. In the experiment with a high computation resource, a bee updates its solution N/10 times in Algorithm 4, where N is the number of cities of the TSP instance. In the experiment with a low computation resource, a bee updates its solution N/100 times in Algorithm 4. Fig. 2 compares the performances of the four selection strategies. It clearly shows that the relative merits among the different selection strategies depend on the computation resource used. For example, when a low computation resource is used, disruptive selection has the best performance and roulette wheel selection has the worst performance. But when a high computation resource is used, roulette wheel selection has the best performance and disruptive selection has the worst performance.

Figure 2: Performance comparison of the four selection strategies


To further analyze the relation between performance and selection pressure, we compare the performances of the HDABC algorithm using linear ranking with different selection pressures. As above, two different computation resources are used. The selection pressure parameter sp in Eq. 8 is set from 1 to 2 with a step of 0.1; a bigger sp gives a higher selection pressure. Fig. 3 compares the performances of different selection pressures for linear ranking. It clearly shows that the suitable selection pressure depends on the computation resource used. For example, when a low computation resource is used, a high selection pressure performs better, as shown in Fig. 3(a). But when a high computation resource is used, a low selection pressure performs better, as shown in Fig. 3(b).

Figure 3: Performance comparison of different selection pressure for linear ranking



4.3. The Necessity of Threshold Acceptance Criterion

For the basic ABC algorithm, which uses the greedy acceptance criterion, the scout contributes to the diversification of the bee colony and improves the global search ability for continuous optimization problems. For the TSP, although the scout does enhance the diversification of the bee colony, this diversification cannot effectively enhance the global search ability.


Table 2: Performance comparison of HDABC with different acceptance strategies


Acceptance Criterion BCL380 XQL662 XIT1083 DKF3954
Greedy acceptance criterion 7.65 7.93 7.94 7.66
Threshold acceptance criterion 1.69 1.59 2.2 2.88

We compare the performances of the HDABC algorithm with the greedy acceptance criterion and the HDABC algorithm with the threshold acceptance criterion in Tab. 2. The simulation results clearly show that the threshold acceptance criterion is far better than the greedy acceptance criterion. A possible reason is that the newly produced solution of a scout is far worse than those of the other bees, so it contributes little to guiding other bees and also contributes little to finding better solutions.

4.4. The Contribution of Scout


To further analyze the effectiveness of the scout, we compare the performances of the HDABC algorithm with different values of the parameter limit. The limit values used include 10, 20, 30, 40, and from 50 to 1000 with a step of 50. Fig. 4 shows the simulation results. A low limit is apparently not suitable, because frequent use of the scout leads the bees into stagnation. When limit is not less than 100, the simulation results show no significant difference among the different limit values in most cases. There are two reasons why the scout is of little use here. One is the characteristic of the adopted solution updating equation: when the target solution and the guiding solution share the same edge, a different edge is selected from the nearest-neighbor list, and this strategy already provides some diversification for the HDABC algorithm. The other reason is that, although creating a new random solution can enhance the diversification of the ABC algorithm, this enhancement contributes little to the global search ability of the HDABC algorithm.


Figure 4: Comparison of different scout trigger parameter limit for HDABC
5. Competitiveness of HDABC algorithm

In order to observe the competitiveness of the HDABC algorithm, its performance is compared with some other ABC-based algorithms (Section 5.1) and some state-of-the-art algorithms (Section 5.2) on a large number of benchmark TSP instances. Unless explicitly stated otherwise, in all the following experiments the length of the threshold list was 300, the maximum generation was 1000, and the swarm size was 30. The best solution found (Best) and the percentage error of the average solution (PE) are used to compare the performances of the different algorithms. The Wilcoxon signed ranks test is used to compare the PE of the HDABC algorithm and the other algorithms.

5.1. Compare with Other ABC Algorithms

To observe the performance of the HDABC algorithm among ABC-based algorithms, the HDABC algorithm is compared with three other ABC-based algorithms: the 2-opt-ABC [2], DABC [18], and ACO-ABC [11] algorithms. In the HDABC algorithm, the number of solution updates in Algorithm 4 was N/10, where N is the number of cities of the TSP instance.


Table 3: Performance comparison between 2-opt-ABC and HDABC algorithms


No.  Instance   Optimal   2-opt-ABC Best   2-opt-ABC PE   HDABC Best   HDABC PE
1    Eil51      426       426              0.55           426          0.01
2    Berlin52   7542      7542             0.04           7542         0
3    Kroa150    26524     26731            2.37           26524        0.14
4    Ts225      126643    127004           0.64           126643       0
5    Rd400      15281     15798            5.16           15354        1.07
6    D657       48912     50653            4.36           49250        1.06
7    Pr1002     259045    269702           5.42           262554       1.67
8    Pr2392     378032    395710           6.13           383792       2.07

In the 2-opt-ABC algorithm, the control parameters were set to 40, 2000, and 100 for the population size, maximum generation, and limit, respectively. Runs were repeated 50 times for each problem instance. The best solution and PE are given in Tab. 3, which clearly shows that the HDABC algorithm outperforms the 2-opt-ABC algorithm on all 8 instances. The Wilcoxon signed ranks test is used to compare the HDABC algorithm and the 2-opt-ABC algorithm. The computed R+, R−, and p-value are 36, 0, and 0.012, respectively. This means that the HDABC algorithm is significantly better than the 2-opt-ABC algorithm.
405 is significantly better than 2-opt-ABC algorithm.
ED

HDABC algorithm is compared with DABC algorithm and ACO-ABC al-


gorithm on instances where city distances are represented by float numbers. In
DABC algorithm, population size P was calculated by using Eq. 11:
PT

 
N
P = ∗2 (11)
2
CE

where N is the number of cities of the test instance. Maximum generation


was 100000. The scout bee occurrence parameter limit was calculated as P ∗
N ∗ 100. In ACO-ABC algorithm, the maximum number of iterations was
AC

500. The population size P was equal to N . If the N was an odd number,
410 the P was increased by 1. Runs were repeated 20 times for each problem
instance. The best float solution and PE were given in Tab. 4. Wilcoxon signed
ranks test is used to compare HDABC algorithm with DBAC and ACO-ABC
algorithms. For HDABC algorithm and DABC algorithm, the computed R+ ,


Table 4: Compare HDABC with DABC and ACO-ABC


No.  Instance   Optimal    DABC Best   DABC PE   ACO-ABC Best   ACO-ABC PE   HDABC Best   HDABC PE
1    Oliver30   423.74     423.74      0         423.74         0            423.74       0
2    Eil51      428.87     428.87      0.28      431.74         3.39         428.87       0.00
3    Berlin52   7544.37    7544.37     0         7544.37        0            7544.37      0
4    St70       677.11     677.11      0.55      687.24         3.47         677.11       0.05
5    Eil76      545.39     -           -         551.07         2.31         544.37       -0.11
6    Pr76       108159.4   108159.4    0.43      113798.6       6.39         108159.4     0
7    Kroa100    21285.44   21285.44    0.98      22122.75       5.40         21285.44     0.01
8    Eil101     642.31     653.30      2.90      672.71         6.39         640.21       0.15
9    Ch150      6532.58    -           -         6641.69        2.21         6530.90      0.22
10   Tsp225     3859.00    4122.50     8.07      4090.54        7.74         3861.92      0.93
11   A280       2586.77    2812.68     11.27     -              -            2586.77      0.24
The Wilcoxon signed ranks test is used to compare the HDABC algorithm with the DABC and ACO-ABC algorithms. For the HDABC algorithm and the DABC algorithm, the computed R+, R−, and p-value are 45, 0, and 0.018, respectively. This means that the HDABC algorithm is significantly better than the DABC algorithm. For the HDABC algorithm and the ACO-ABC algorithm, the computed R+, R−, and p-value are 55, 0, and 0.012, respectively. This means that the HDABC algorithm is also significantly better than the ACO-ABC algorithm. Furthermore, the HDABC algorithm found new best solutions for Eil76, Eil101, and Ch150.

5.2. Compare with Some State-of-the-art Algorithms

In order to observe the competitiveness of the HDABC algorithm among meta-heuristics, the HDABC algorithm is compared with several newly published algorithms, such as the IDBA [24], ASA-GS [10], and MSA-IBS [36] algorithms. In the HDABC algorithm, the number of solution updates in Algorithm 4 was N/2.


The HDABC algorithm is compared with the IDBA algorithm on 22 TSP instances. To compare their convergence behavior, we also list the results of the HDABC algorithm using the greedy acceptance criterion (HDABC-G). In the IDBA algorithm, the number of bats was 50, and all the tests were performed on an Intel Core i5-2410 laptop with 2.30 GHz and 4 GB of RAM. Runs were repeated 20 times for each problem instance. The best solution and PE are given in Tab. 5. In the table, the optimal tour lengths of the instances and the results of the IDBA algorithm are taken from [24]. Among the 22 instances, the HDABC algorithm can find the optimal solution on 19 instances, and can always find the optimal solution on 14 instances. The average PE of the HDABC algorithm, the HDABC-G algorithm, and the IDBA algorithm are 0.07, 1.16, and 1.83, respectively. The Wilcoxon signed ranks test is used to compare the HDABC algorithm and the IDBA algorithm. The computed R+, R−, and p-value are 253, 0, and 8.84 ∗ 10−5, respectively. This means that the HDABC algorithm is significantly better than the IDBA algorithm. Wilcoxon signed ranks tests also show that the HDABC algorithm significantly outperforms the HDABC-G algorithm and the HDABC-G algorithm significantly outperforms the IDBA algorithm.

To compare the convergence behavior of the HDABC algorithm, the HDABC-G algorithm, and the IDBA algorithm, the average number of objective function evaluations (Eval, in thousands) needed to reach the final solution, the average running time (in seconds), and the average number of iterations (Iter) needed to reach the final solution for the HDABC-G algorithm and the HDABC algorithm are given in Tab. 6. Tab. 6 shows that the convergence behavior of the HDABC algorithm is quite different from that of the IDBA algorithm and the HDABC-G algorithm. The IDBA algorithm and the HDABC-G algorithm, which both use the greedy acceptance criterion, generally converge faster than the HDABC algorithm, but they may be trapped in local minima more easily. The HDABC algorithm, which uses the threshold acceptance criterion, can control its convergence speed more elegantly.


We compare the HDABC algorithm with the ASA-GS [10] and MSA-IBS [36] algorithms on 40 benchmark instances with from 150 to 85900 cities. The ASA-GS algorithm was implemented in C++ and run on a 2.83 GHz PC with 2 GB of RAM. In the MSA-IBS algorithm and the HDABC algorithm, suitable parameters were used for each instance such that the CPU time of those algorithms is less than that of the ASA-GS algorithm. As with the ASA-GS algorithm, the HDABC algorithm and the MSA-IBS algorithm were executed for 5 trials on each instance, and the results are listed in Tab. 7.

Table 5: Performance comparison among IDBA, HDABC-G, and HDABC algorithms
No.  Instance   Optimal   IDBA Best   IDBA PE   HDABC-G Best   HDABC-G PE   HDABC Best   HDABC PE
1 Oliver30 420 420 0 420 0 420 0
2 Eilon50 425 425 0.56 425 0.73 425 0

3 Eil51 426 426 0.49 426 0.54 426 0
4 Berlin52 7542 7542 0 7542 0.21 7542 0
5 St70 675 675 0.6 675 0.39 675 0
6 Eilon75 535 535 2.27 537 1.65 535 0
7 Eil76 538 539 1.84 544 1.95 538 0
8 KroA100 21282 21282 0.76 21282 0.18 21282 0
9 KroB100 22140 22140 1.63 22199 0.98 22141 0.13
10 KroC100 20749 20749 1.43 20749 0.37 20749 0

11 KroD100 21294 21294 1.39 21374 0.94 21294 0


12 KroE100 22068 22068 1.26 22073 0.52 22068 0.2
13 Eil101 629 634 2.69 634 2.15 629 0.05

14 Pr107 44303 44303 1.1 44347 0.44 44303 0.1


15 Pr124 59030 59030 0.64 59030 0.24 59030 0
16 Pr136 96772 97547 2.6 97350 2.06 96772 0

17 Pr144 58537 58537 0.58 58537 0.08 58537 0.02


18 Pr152 73682 73921 1.33 73682 0.36 73682 0
19 Pr264 49135 49756 3.48 49361 1.86 49135 0

20 Pr299 48191 48310 2.99 48743 2.47 48191 0.06


21 Pr439 107217 111538 6.98 108523 2.62 107277 0.24
22 Pr1002 259047 270016 5.6 269267 4.83 260372 0.8

Table 6: Convergence comparison among IDBA, HDABC-G, and HDABC algorithms
No.  IDBA Eval (×10³)  IDBA Time (s)  HDABC-G Eval (×10³)  HDABC-G Time (s)  HDABC-G Iter  HDABC Eval (×10³)  HDABC Time (s)  HDABC Iter
1 2.17 0.4 14.13 0.13 15.7 211.68 0.15 235.2
2 22.8 1.5 82.5 0.2 55 593.63 0.24 395.75

3 15.37 1.7 71.68 0.21 46.85 610.09 0.24 398.75
4 20.07 2.1 45.47 0.21 29.15 474.79 0.25 304.35
5 72.67 3.9 173.25 0.3 82.5 1135.16 0.34 540.55
6 116.56 4.5 177.64 0.33 78.95 948.38 0.4 421.5
7 91.53 5.1 217.06 0.32 95.2 926.59 0.45 406.4
8 739.86 10.6 350.55 0.47 116.85 1634.4 0.51 544.8
9 461.05 11.1 1101.3 0.48 367.1 2004 0.51 668
10 872.51 12.0 417.9 0.47 139.3 1412.25 0.52 470.75

11 600.31 11.7 478.5 0.53 159.5 1660.65 0.53 553.55


12 602.94 11.4 541.35 0.44 180.45 1943.1 0.5 647.7
13 512.73 13.1 279.06 0.41 92.1 1686.2 0.51 556.5

14 679.07 12.1 1074.71 0.44 334.8 1874 0.56 583.8


15 1602.51 18.5 317.5 0.54 85.35 1817.41 0.64 488.55
16 2866.60 23.4 669.53 0.59 164.1 2841.92 0.71 696.55

17 4361.11 30.3 1377.65 0.63 318.9 3584.74 0.84 829.8


18 4853.19 31.0 766.99 0.67 168.2 2642.98 0.84 579.6
19 6375.46 92.5 3933.47 1.27 496.65 4428.07 1.63 559.1

20 6597.94 147.2 2695.49 1.43 300.5 8042.95 1.96 896.65


21 8346.85 201.9 5804.68 2.25 440.75 12605.67 3.28 957.15
22 12103.73 681.7 14290.52 6.9 475.4 29570.02 14.33 983.7


As can be seen in Tab. 7, the average PE of the HDABC algorithm over all instances is 0.69, which is better than the 1.87 of the ASA-GS algorithm and the 0.86 of the MSA-IBS algorithm. The Wilcoxon signed ranks test is used to compare the HDABC algorithm, the MSA-IBS algorithm, and the ASA-GS algorithm. For the HDABC algorithm and the ASA-GS algorithm, the computed R+, R−, and p-value are 817, 3, and 1.46 ∗ 10−7, respectively. This means that the HDABC algorithm is significantly better than the ASA-GS algorithm. For the HDABC algorithm and the MSA-IBS algorithm, the computed R+, R−, and p-value are 686, 134, and 0.005, respectively. This means that the HDABC algorithm is also significantly better than the MSA-IBS algorithm.

6. Conclusions

The ABC algorithm is a smart swarm intelligence algorithm which has splendid collective behaviors and was first proposed for continuous optimization problems. Even though the ABC algorithm is especially simple and easy to implement, it is non-trivial to use it for new applications, especially for combinatorial optimization problems. Suitable strategies should be used to guarantee that the redesigned solution updating equation keeps the excellent features of the original one and that the integrated effect of the selection strategy, the acceptance strategy, and the scout trigger parameter limit yields a good balance between intensification and diversification for the ABC algorithm. Guided by those principles, this paper presents a hybrid discrete ABC algorithm with a threshold acceptance criterion for the TSP. Experimental results show that the new solution updating equation of the HDABC algorithm, which can learn both from other bees and from the problem at hand, can search the solution space of the TSP intelligently and efficiently. Due to the character of discrete optimization problems, the scout by itself is not effective enough to keep the HDABC algorithm from premature convergence, whereas the threshold acceptance criterion can effectively obtain a better balance between intensification and diversification. The selection of a suitable selection strategy for onlooker bees mainly depends on the computation resources the algorithm can provide.


Table 7: Compare HDABC with ASA-GS and MSA-IBS on 40 benchmark instances


No.  Instance   Optimal   ASA-GS PE   ASA-GS Time (s)   MSA-IBS PE   MSA-IBS Time (s)   HDABC PE   HDABC Time (s)
1 Ch150 6528 0.16 10.91 0.41 2.95 0.31 1.86
2 Kroa150 26524 0.05 10.9 0.05 2.94 0.05 1.88

3 Krob150 26130 0.18 10.9 -0.01 2.95 -0.01 1.82
4 Pr152 73682 0.01 10.85 0 3.04 0 1.89

5 U159 42080 0.75 11.49 -0.01 3.12 -0.01 2.01
6 Rat195 2323 1.07 14.37 0.6 3.4 0.61 2.18

7 D198 15780 0.41 14.6 0.24 3.58 0.27 2.28
8 Kroa200 29368 0.23 14.26 0 3.56 0.05 2.4
9 Krob200 29437 0.25 14.24 0.03 3.55 0.02 2.31
10 Ts225 126643 0 16.05 0 4.1 0 2.78
11 Pr226 80369 0.39 16.7 0 4.33 0 2.84
12 Gil262 2378 0.86 19.43 0.42 4.86 0.38 3.42
13 Pr264 49135 0 19.09 0.18 4.31 0 2.85
14 Pr299 48191 0.28 21.94 0.04 4.91 0.11 3.7
15 Lin318 42029 0.84 23.35 0.18 4.36 0.26 3.52
16 Rd400 15281 0.97 30.4 0.2 6.04 0.26 4.98
17 Fl417 11861 1.54 32.02 1.15 7.08 1.01 5.61
18 Pr439 107217 2.8 34.92 0.36 6.98 0.22 5.68
19 Pcb442 50778 0.96 35.75 0.08 6.93 0.15 5.93

20 U574 36905 1.25 48.47 1.03 9.34 0.37 8.85


21 Rat575 6773 1.94 52.1 1.23 8.9 0.75 8.83
22 U724 41910 1.33 66.83 0.8 12.9 0.33 13.43
23 Rat783 8806 2 78.9 1.31 14.25 0.91 15.29

24 Pr1002 259045 2.01 164.42 1.16 11.65 0.71 11.19


25 Pcb1173 56892 1.63 193.08 1.07 14.23 0.77 14.67
26 D1291 50801 2.85 214.64 2.12 16.24 1.64 14.32
27 Rl1323 270199 1.2 210.16 0.98 17.11 0.5 15.29

28 Fl1400 20127 3.25 232.02 2.07 27.36 1.29 18.16


29 D1655 62128 3.26 281.88 2.11 21.84 1.28 21.28
30 Vm1748 336556 2.18 276.98 0.7 26.93 0.72 25.21
31 U2319 234256 1.06 410.97 0.32 22.02 0.26 25.54

32 Pcb3038 137694 2.57 554.28 1.05 36.14 1.03 40.42


33 Fnl4461 182566 2.65 830.9 1.23 35.53 1.3 44.21
34 Rl5934 556045 3.48 1044.0 1.76 52.89 1.79 63.74
35 Pla7397 23260728 3.89 1245.2 2.03 146.23 1.47 108.34

36 Usa13509 19982859 4.14 2016.1 1.43 495.69 1.57 434.04


37 Brd14051 468385 3.80 2080.5 1.61 504.41 1.67 452.87
38 D18512 645238 3.75 2594.0 1.38 774.24 1.51 816.38
39 Pla33810 66048945 5.27 4199.9 2.45 2388.0 1.81 2318.8
40 Pla85900 142382641 9.63 8855.1 2.72 5641.0 2.23 5729.0
Average 1.87 650.31 0.86 259.00 0.69 256.50


If much computation resource is provided, the HDABC algorithm can use selection schemes with a low selection pressure; otherwise, the HDABC algorithm should use selection schemes with a high selection pressure. The simulation results confirm the competitiveness of the HDABC algorithm. Furthermore, the design principles and the analysis procedure of the proposed HDABC algorithm can be used to guide the design and implementation of other swarm intelligence algorithms for discrete optimization problems.

Acknowledgement

This work was supported by the Natural Science Foundation of Fujian Province of P. R. China under Grants No. 2014J01219, No. 2015J01233, and No. 2016J01280, the Major Projects of Regional Development of Fujian Province of P. R. China under Grant No. 2015N3011, and the Special Fund for Scientific and Technological Innovation of Fujian Agriculture and Forestry University under Grants No. CXZX2016026 and No. CXZX2016031.

References

[1] M. N. Ab Wahab, S. Nefti-Meziani, A. Atyabi, A comprehensive review of


505 swarm optimization algorithms, PloS one 10 (5) (2015) e0122827.
ED

[2] B. Akay, E. Aydogan, L. Karacan, 2-opt based artificial bee colony algo-
rithm for solving traveling salesman problem, in: 2nd World Conference on
PT

Information Technology (WCIT-2011), vol. 1, 2012.

[3] M. Akhand, S. Akter, M. A. Rashid, S. Yaakob, Velocity tentative pso: An


CE

510 optimal velocity implementation based particle swarm optimization to solve


traveling salesman problem, IAENG International Journal of Computer
Science 42 (3) (2015) 221–232.
AC

[4] L. Bao, J.-c. Zeng, Comparison and analysis of the selection mechanism in
the artificial bee colony algorithm, in: Hybrid Intelligent Systems, 2009.
515 HIS’09. Ninth International Conference on, vol. 1, IEEE, 2009.

30
ACCEPTED MANUSCRIPT

[5] B. Basturk, D. Karaboga, An artificial bee colony (abc) algorithm for


numeric function optimization, in: IEEE swarm intelligence symposium,
vol. 8, 2006.

[6] G. Dueck, T. Scheuer, Threshold accepting: A general purpose optimiza-

T
520 tion algorithm appearing superior to simulated annealing, Journal of com-

IP
putational physics 90 (1) (1990) 161–175.

CR
[7] J. B. Escario, J. F. Jimenez, J. M. Giron-Sierra, Ant colony extended:
experiments on the travelling salesman problem, Expert Systems with Ap-
plications 42 (1) (2015) 390–410.

525

US
[8] K. Z. Gao, P. N. Suganthan, Q. K. Pan, T. J. Chua, C. S. Chong, T. X. Cai,
An improved artificial bee colony algorithm for flexible job-shop scheduling
problem with fuzzy processing time, Expert Systems with Applications 65
AN
(2016) 52–67.

[9] K. Z. Gao, P. N. Suganthan, Q. K. Pan, M. F. Tasgetiren, A. Sadollah,


M

530 Artificial bee colony algorithm for scheduling and rescheduling fuzzy flexible
job shop problem with new job insertion, Knowledge-Based Systems 109
(2016) 1–16.
ED

[10] X. Geng, Z. Chen, W. Yang, D. Shi, K. Zhao, Solving the traveling salesman
problem based on an adaptive simulated annealing algorithm with greedy
PT

535 search, Applied Soft Computing 11 (4) (2011) 3680–3689.

[11] M. Gündüz, M. S. Kiran, E. Özceylan, A hierarchic approach based on


CE

swarm intelligence to solve the traveling salesman problem, Turkish Journal


of Electrical Engineering & Computer Sciences 23 (1) (2015) 103–117.
AC

[12] E. Hancer, B. Xue, D. Karaboga, M. Zhang, A binary abc algorithm based


540 on advanced similarity scheme for feature selection, Applied Soft Comput-
ing 36 (2015) 334–348.

[13] H. Ismkhan, Effective heuristics for ant colony optimization to handle large-
scale problems, Swarm and Evolutionary Computation 32 (2017) 140–149.

31
ACCEPTED MANUSCRIPT

[14] D. Karaboga, An idea based on honey bee swarm for numerical optimiza-
545 tion, Tech. rep., Technical report-tr06, Erciyes university, engineering fac-
ulty, computer engineering department (2005).

[15] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive

T
survey: artificial bee colony (abc) algorithm and applications, Artificial

IP
Intelligence Review 42 (1) (2014) 21–57.

CR
550 [16] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor net-
work routing using artificial bee colony algorithm, Wireless Networks 18 (7)
(2012) 847–860.

US
[17] D. Karaboga, C. Ozturk, N. Karaboga, B. Gorkemli, Artificial bee colony
programming for symbolic regression, Information Sciences 209 (2012) 1–
AN
555 15.

[18] M. S. Kıran, H. İşcan, M. Gündüz, The analysis of discrete artificial bee


colony algorithm with neighborhood operator on traveling salesman prob-
M

lem, Neural computing and applications 23 (1) (2013) 9–21.

[19] H. E. Kocer, M. R. Akca, An improved artificial bee colony algorithm with


ED

560 local search for traveling salesman problem, Cybernetics and Systems 45 (8)
(2014) 635–649.
PT

[20] D. S. Lee, V. S. Vassiliadis, J. M. Park, List-based threshold-accepting


algorithm for zero-wait scheduling of multiproduct batch plants, Industrial
& engineering chemistry research 41 (25) (2002) 6579–6588.
CE

565 [21] Z. Li, Z. Zhou, X. Sun, D. Guo, Comparative study of artificial bee colony
algorithms with heuristic swap operators for traveling salesman problem,
AC

in: International Conference on Intelligent Computing, Springer, 2013.

[22] M. Mahi, Ö. K. Baykan, H. Kodaz, A new hybrid method based on parti-
cle swarm optimization, ant colony optimization and 3-opt algorithms for
570 traveling salesman problem, Applied Soft Computing 30 (2015) 484–490.

32
ACCEPTED MANUSCRIPT

[23] J. B. Odili, M. N. Mohmad Kahar, Solving the traveling salesman’s prob-


lem using the african buffalo optimization, Computational intelligence and
neuroscience 2016 (2016) 3.

[24] E. Osaba, X.-S. Yang, F. Diaz, P. Lopez-Garcia, R. Carballedo, An im-

T
575 proved discrete bat algorithm for symmetric and asymmetric traveling

IP
salesman problems, Engineering Applications of Artificial Intelligence 48
(2016) 59–71.

CR
[25] A. Ouaarab, B. Ahiod, X.-S. Yang, Discrete cuckoo search algorithm for the
travelling salesman problem, Neural Computing and Applications 24 (7-8)
580 (2014) 1659–1669.
US
[26] C. Ozturk, E. Hancer, D. Karaboga, Dynamic clustering with improved
AN
binary artificial bee colony algorithm, Applied Soft Computing 28 (2015)
69–80.

[27] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony


M

585 algorithm based on genetic operators, Information Sciences 297 (2015) 154–
170.
ED

[28] P. V. Paul, N. Moganarangan, S. S. Kumar, R. Raju, T. Vengattaraman,


P. Dhavachelvan, Performance analyses over population seeding techniques
of the permutation-coded genetic algorithm: An empirical study based on
PT

590 traveling salesman problems, Applied Soft Computing 32 (2015) 383–402.

[29] S. Sabet, M. Shokouhifar, F. Farokhi, A hybrid mutation-based artificial


CE

bee colony for traveling salesman problem, in: 4th International Conference
on Electronics Computer Technology (ICECT), April 2012, 2013.
AC

[30] Y. Saji, M. E. Riffi, A novel discrete bat algorithm for solving the travelling
595 salesman problem, Neural Computing and Applications 27 (7) (2016) 1853–
1866.

33
ACCEPTED MANUSCRIPT

[31] M. Saraei, R. Analouei, P. Mansouri, Solving of travelling salesman problem


using firefly algorithm with greedy approach, Cumhuriyet Science Journal
36 (6) (2015) 267–273.

[32] P. Shi, S. Jia, A hybrid artificial bee colony algorithm combined with simu-

T
600

lated annealing algorithm for traveling salesman problem, in: Information

IP
Science and Cloud Computing Companion (ISCC-C), 2013 International
Conference on, IEEE, 2013.

CR
[33] M. Shokouhifar, A. Jalali, H. Torfehnejad, Optimal routing in traveling
605 salesman problem using artificial bee colony and simulated annealing, in:

US
1st National Road ITS Congress, 2015.

[34] C. Tarantilis, C. Kiranoudis, A list-based threshold accepting method for


AN
job shop scheduling problems, International Journal of Production Eco-
nomics 77 (2) (2002) 159–171.

610 [35] C. Tarantilis, C. Kiranoudis, V. Vassiliadis, A list based threshold accepting


M

metaheuristic for the heterogeneous fixed fleet vehicle routing problem,


Journal of the Operational Research Society 54 (1) (2003) 65–71.
ED

[36] C. Wang, M. Lin, Y. Zhong, H. Zhang, Solving travelling salesman problem


using multiagent simulated annealing algorithm with instance-based sam-
615 pling, International Journal of Computing Science and Mathematics 6 (4)
PT

(2015) 336–353.

[37] J. Wang, O. K. Ersoy, M. He, F. Wang, Multi-offspring genetic algorithm


CE

and its application to the traveling salesman problem, Applied Soft Com-
puting 43 (2016) 415–423.
AC

620 [38] L. Wang, G. Zhou, Y. Xu, S. Wang, M. Liu, An effective artificial bee colony
algorithm for the flexible job-shop scheduling problem, The International
Journal of Advanced Manufacturing Technology 60 (1-4) (2012) 303–315.

34
ACCEPTED MANUSCRIPT

[39] X. Xu, X. Lei, Multiple sequence alignment based on abc sa, in: Interna-
tional Conference on Artificial Intelligence and Computational Intelligence,
625 Springer, 2010.

[40] Z. Xu, Y. Wang, S. Li, Y. Liu, Y. Todo, S. Gao, Immune algorithm com-

T
bined with estimation of distribution for traveling salesman problem, IEEJ

IP
Transactions on Electrical and Electronic Engineering 11 (S1) (2016) S142–
S154.

CR
630 [41] W. Yong, Hybrid max–min ant system with four vertices and three lines
inequality for traveling salesman problem, Soft Computing 19 (3) (2015)
585–596.
US
[42] Y. Zhou, Q. Luo, H. Chen, A. He, J. Wu, A discrete invasive weed optimiza-
AN
tion algorithm for solving traveling salesman problem, Neurocomputing 151
635 (2015) 1227–1236.

[43] Y. Zhou, X. Ouyang, J. Xie, A discrete cuckoo search algorithm for travel-
M

ling salesman problem, International Journal of Collaborative Intelligence


1 (1) (2014) 68–84.