
Applied Soft Computing 64 (2018) 564–580

Contents lists available at ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

A comparative study of improved GA and PSO in solving multiple traveling salesmen problem

Honglu Zhou a, Mingli Song a,∗, Witold Pedrycz b,c,d

a School of Computer Science, Communication University of China, Beijing, China
b Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6R 2V4, Canada
c Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
d Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

a r t i c l e i n f o

Article history:
Received 10 September 2016
Received in revised form 20 December 2017
Accepted 21 December 2017
Available online 27 December 2017

Keywords:
Partheno genetic algorithm
Multiple traveling salesman problem
Particle swarm optimization

a b s t r a c t

Multiple traveling salesman problem (MTSP) is a generalization of the classic traveling salesman problem (TSP). Compared to TSP, MTSP is more common in real-life applications. In this paper, in order to solve the minsum MTSP with multiple depots, closed paths, and the requirement of a minimum number of cities each salesman should visit, we propose two partheno genetic algorithms (PGA). One is a PGA with roulette selection and elitist selection, in which four new kinds of mutation operation are proposed. The other one, named IPGA, binds the selection and mutation together; a new selection operator and a more comprehensive mutation operator are used. The new mutation operator is based on the four kinds of mutation operation in PGA and eliminates the mutation probability. For comparative analysis, we also adopt the particle swarm optimization (PSO) algorithm and one state-of-the-art method from the literature (an invasive weed optimization algorithm) to solve the MTSP. The algorithms are validated on publicly available TSPLIB benchmarks, and their performance is discussed and evaluated through a series of comparative experiments. IPGA is demonstrated to be superior in solving the MTSP.

© 2017 Elsevier B.V. All rights reserved.

1. Introduction

The traveling salesman problem (TSP) is a classic NP-complete problem in combinatorial optimization [1]. The objective is to find the salesman's route of minimum cost under the condition that the salesman visits every given location exactly once and, at the end, returns to the starting location. Many real-world problems can be modeled as a TSP after transformation, such as route planning, production scheduling, and network communication [49–54]. However, the classic TSP does not fit certain circumstances. For instance, suppose that a company has multiple salesmen living in different cities, and the company requires each of them to visit at least a certain number of cities so as to earn the minimum wage. To address such situations, the multiple traveling salesman problem (MTSP) was introduced [62]. By replacing the single salesman with several salesmen, the MTSP adds conditions that satisfy more realistic requirements [4]. In this sense, the MTSP is more suitable for real-life applications.

The MTSP considered here is the symmetric minsum MTSP with multiple depots and closed paths, with the additional restriction of a minimum number of cities per salesman. The minsum MTSP minimizes the sum of the lengths of all tours of the multiple salesmen; each intermediate city is visited exactly once. The MTSP can be generalized in many different ways [62]. It has two categories: the single depot case and the multiple depots case. In the single depot case, all salesmen start and end in the same city. In the multiple depots case, the routes can be either closed or open paths. If the routes are closed paths, the salesmen have different origin cities and return to their origin cities after completing their tours. If the routes are open paths, the salesmen have different origin cities and do not go back to them at the end of their tours. We investigate the MTSP with multiple depots and closed paths: all salesmen start their routes in different cities and end their routes in their own origin cities. In addition, each salesman has to visit a minimum number of cities.

The MTSP we consider can be concisely described as follows. Given an undirected graph G = (V, A), i.e., an ordered pair comprising a set V of vertices together with a set A of arcs, the graph G has nonnegative costs associated with its arcs.

∗ Corresponding author.
E-mail addresses: hlzhou@cuc.edu.cn (H. Zhou), songmingli@cuc.edu.cn (M. Song), wpedrycz@ualberta.ca (W. Pedrycz).

https://doi.org/10.1016/j.asoc.2017.12.031
1568-4946/© 2017 Elsevier B.V. All rights reserved.

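To make the objective concrete, the following is a minimal sketch (ours, not the paper's; all function and variable names are illustrative) of how the minsum cost of a candidate solution, i.e., the total length of all salesmen's closed tours as formalized in Eq. (1), can be computed from a distance matrix:

```python
def tour_length(cities, dist):
    """Length of one closed tour: visit cities in order, then return to start."""
    return sum(dist[cities[k]][cities[(k + 1) % len(cities)]]
               for k in range(len(cities)))

def minsum_cost(partition, dist):
    """Minsum MTSP objective: total length of all salesmen's closed tours."""
    return sum(tour_length(tour, dist) for tour in partition)

# Toy instance: 5 cities on a line, distance = |i - j|.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
# Two salesmen: one tours cities 0-1-2, the other cities 3-4.
print(minsum_cost([[0, 1, 2], [3, 4]], dist))  # 0->1->2->0 = 4, 3->4->3 = 2, prints 6
```

Each salesman's depot is simply the first city of his tour; the modulo index closes the cycle back to it, matching the closed-path requirement above.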
Let n represent the total number of salesmen. The objective is to partition V into n nonempty subsets $S_i$ (so that $V = \bigcup_{i=1}^{n} S_i$), and to find a minimum cost circuit passing through each vertex of each subset $S_i$ exactly once.

The MTSP objective function is as follows:

$$\text{Minimize} \quad \sum_{i=1}^{n}\left(\sum_{j=1}^{m_i-1} x_{j,j+1}^{i} + x_{m_i,1}^{i}\right) \qquad (1)$$

The outer sum represents cycling through the n salesmen. The inner sum represents cycling through all the cities that the ith salesman visits (the index of the first city the ith salesman visits is 1, and the last city index is $m_i$). $x_{j,j+1}^{i}$ denotes the distance between city j and the next city visited by the ith salesman, and $x_{m_i,1}^{i}$ denotes the distance between the last city $m_i$ and the first city visited by the ith salesman; $x_{j,j+1}^{i}$ is equal to $x_{j+1,j}^{i}$. The value of $m_i$ should not be less than the specified minimum number of cities for each salesman, and $\sum_{i=1}^{n} m_i$ equals the total number of cities.

Genetic algorithm (GA) is an evolutionary algorithm that follows the principle of survival of the fittest [5,6]. Since its introduction, GA has been widely used, especially in pattern recognition, image processing, machine learning, neural networks, adaptive control, artificial life, function optimization and production scheduling [7–14]. Particle swarm optimization (PSO) is a popular swarm-inspired method in computational intelligence. It is a population-based stochastic optimization technique [20,21,40]. Its underlying idea is to find the optimal solution through information sharing and individual collaboration. PSO has been widely used in network traffic management, distributed control of collective robotics, function optimization, neural network training, fuzzy system control, intrusion detection, granular computing and decision-making scheduling [22–27,59,60].

This paper aims to use improved GA and PSO methods to solve the symmetric minsum MTSP with multiple depots and closed paths (with a minimum city number restriction). Sequence encoding is the most suitable encoding and is often used to solve the MTSP; however, it causes problems in both the traditional GA and the standard PSO. Hence, we choose PGA instead of classic GA [34,35]. It uses roulette selection and elitist selection, and has special mutation operators. Considering some drawbacks of PGA, an improved PGA, namely IPGA, is proposed. It optimizes the genetic operation of PGA and further improves the search abilities, avoiding the shortcomings associated with parameter setting and making the original algorithm more efficient and stable. We also adopt the PSO algorithm to solve the MTSP and carry out a comparative analysis [44]. To compare our methods with other state-of-the-art approaches, we implement the invasive weed optimization algorithm (IWO) with local search [79]. The experimental studies show that our results are much better than those obtained by IWO.

The originality of this study is as follows. First, we investigate the multiple depots and closed paths MTSP with a minimum city number. The complexity of the MTSP is higher than that of the TSP; however, the research effort on the MTSP is limited, while there is a great amount of literature on solving the TSP. On the one hand, compared to the well-known TSP, the research on the MTSP is quite limited. On the other hand, the MTSP has many variations, and there are few studies on the type of MTSP we consider. To our knowledge, no specific method has been proposed for our researched problem. We report our test results on the TSPLIB benchmark for future reference in the experimental studies (see Section 6.1). Second, we propose two partheno genetic algorithms (PGA and IPGA). In particular, we introduce four new kinds of mutation operation, i.e., FlipInsert, SwapInsert, LSlideInsert and RSlideInsert. Based on these mutation operations, the IPGA binds the selection and mutation together and eliminates the mutation probability. By doing so, the easily occurring phenomena of premature convergence and poor convergence ability are effectively prevented. Third, we adopt the PSO method to solve the MTSP. PSO is a relatively new evolutionary algorithm that is able to solve complex problems and has a simple principle and implementation process. Recently, PSO has been successfully applied in many areas, and several papers on PSO have stressed its advantages and claimed the superiority of PSO over GA [45]. However, few studies have reported using PSO to solve the MTSP, and only a few papers consider using PSO for the classical TSP. We design new PSO formulas for the MTSP by defining three new operators. Our experience is that PSO is suitable for solving continuous problems, whereas GA is superior for solving discrete problems; PSO has high search efficiency in the early stage of the search, but a slower convergence rate in the later period. Fourth, we put forward a series of comparative experiments. We not only analyze the performance and efficiency of our three algorithms and compare their computational results with the benchmark results and another existing method, but also study how the number of salesmen and the number of cities affect the solution. It turns out that the choice of a rational number of salesmen is worth considering.

The paper is organized as follows. Section 2 presents a literature review and summarizes the related works on the MTSP. Sections 3, 4 and 5 present the PGA, IPGA and PSO methods. Section 6 is concerned with experimental studies. Section 7 summarizes the paper and puts forward some ideas for future studies.

2. Related studies

Compared with the well-known TSP, the MTSP is much less studied and does not receive the same amount of research effort. As researchers dived into the problem, the MTSP has received more attention recently, and many efficacious methods have been developed. An overview of the literature and algorithms of the MTSP, as well as problem definitions and formulations, has been presented in [62–86].

To solve the MTSP, Russell presented a heuristic in 1977, which forms an extension of the highly successful techniques proposed by Lin and Kernighan [109]. In 1980, Laporte et al. provided an exact algorithm using integer linear programming [63]. In 1986, Gavish et al. developed an efficient branch-and-bound based method for solving the large-scale MTSP [64]. In 1989, Wacholder et al. developed a neural network algorithm [96]; the model is an extension of the Hopfield and Tank approach incorporating the Basic Differential Multipliers Method. Potvin et al. presented a technique that generalizes the classical k-opt exchange procedure [97]. In 1991, Hsu et al. proposed a neural network approach based on the self-organizing feature map model [98]. In 1995, França et al. developed a tabu search heuristic and two exact search schemes for the minmax MTSP, in which the objective is to minimize the length of the longest route [99]. In 1999, Somhom et al. proposed a competition-based adaptive neural network algorithm for the minmax MTSP [100]. They evaluated the algorithms using the standard VRP data in TSPLIB without consideration of capacity constraints.

Many new approaches have been established. In 2003, Song et al. proposed an extended simulated annealing based on the grand canonical ensemble [101]. In 2004, Bektas did a survey to review the MTSP and its practical applications, highlighted some formulations, and described exact and heuristic solution procedures [62]. In 2005, Bektas et al. extended the classical MTSP by imposing a minimal number of cities that a salesman must visit as a side condition (an additional restriction we also consider) [65]. They proposed integer linear programming formulations for both the single depot and multiple depots cases.

In the multiple depots case they considered in their experiments, the salesmen do not have to return to their origin depots, but the number of salesmen at each depot should remain the same at the end as it was in the beginning.

In 2006, Carter et al. proposed a two-part GA chromosome and related operators to model the MTSP; their method can dramatically reduce the number of redundant solutions. They considered two types of MTSP with single depots and closed paths: the first is the minsum MTSP, in which there are no constraints on the maximum number of cities visited by any one salesman; the second minimizes the longest route among the salesmen [69]. Chandran et al. proposed a clustering approach for the MTSP with the criterion of balancing workloads amongst salesmen [71]. In 2007, Brown et al. focused on the application of a grouping GA [73]; they introduced a new chromosome representation for the MTSP. In 2008, Singh et al. developed a new steady-state grouping GA (GGA-SS) [74]. They proposed a chromosome representation scheme with the least possible redundancy; the GGA-SS outperformed the approaches presented in [73]. Zhao et al. designed a pheromone-based crossover operator and used a local search procedure to act as the mutation operator in a GA [102]. In 2009, Liu et al. proposed an ant colony optimization (ACO) algorithm with pheromone trail updating and a local search procedure [103]. Oberlin et al. presented a transformation of a heterogeneous multiple depots MTSP into a single asymmetric TSP, so that algorithms available for the TSP can be used for the MTSP [104].

The MTSP has received much more attention in recent years. In 2010, Király et al. proposed an easily interpretable representation based GA [76]. Zhou et al. used a greedy strategy to generate the initial population of a GA, and combined the mutation operator with a 2-opt local search algorithm [105]. In 2011, Chen et al. studied the two-part chromosome encoding technique, which can solve the MTSP efficiently, and suggested appropriate genetic operators [106]. Ghafurian et al. designed an ant system to solve the multiple depots and closed paths MTSP; in their study, the number of cities that a salesman must visit lies within a certain range [78]. Actually, the MTSP they investigated is the problem most closely related to our objective problem, and it is also a problem without existing methods. To examine the accuracy and efficiency of their algorithm, their experimental results were compared to the answers obtained by solving the same problems with the Lingo 8.0 software, which uses exact methods. In 2012, Sedighpour et al. proposed a modified hybrid metaheuristic algorithm called GA2OPT [77]: at the first stage, the MTSP is solved by the modified GA in each iteration; at the second stage, a 2-opt local search algorithm is used to improve the solutions. Yousefikhoshbakht et al. also introduced a two-phase algorithm [80]: at the first stage, the MTSP is solved by the sweep algorithm, and at the second stage, the elite ACO and 3-opt local search are used to improve the solutions. In 2013, Yu et al. put forward a two-level hybrid algorithm [107], in which the top level is a GA and the bottom level employs branch-and-cut and Lin-Kernighan algorithms. Yuan et al. proposed a new two-part chromosome crossover (TCX) [72]; they first adopted the existing two-part chromosome technique of [69] and, to overcome its limits, proposed TCX. Li et al. presented a new multiple traveling salesman problem, denoted MTSP* [108]. In MTSP*, cities are divided into groups, each of which can be exclusively visited by a predetermined salesman. They designed a GA to solve it, with three pairs of crossover and mutation operators. Wang et al. proposed a new method based on graph theory to solve a multiple depots and open paths MTSP [82]. Yousefikhoshbakht et al. presented a new modified ACO mixed with insert, swap and 2-opt algorithms [83]. In 2014, Hosseinabadi et al. presented a new hybrid algorithm, called GELS-GA, through a combination of GA and Gravitational Emulation Local Search (GELS) algorithms [84]. Cheikhrouhou et al. proposed a new market-based multi-robot coordination technique, called move and improve, for the multiple depots MTSP [85]; in each step of the algorithm, a robot moves and attempts to improve its solution by coordinating with its neighbor robots. Larki et al. presented an evolutionary optimization algorithm combining a Modified Imperialist Competitive Algorithm with the Lin-Kernighan algorithm, in which an absorption function and several local search algorithms are used [86]. In 2015, Rostami et al. modified the GELS algorithm of [84]; it is based on the local search concept and uses two main parameters from physics, velocity and gravity [57]. Venkatesh et al. proposed two metaheuristic approaches for the MTSP [79]: the first is based on the artificial bee colony algorithm, whereas the second is based on the invasive weed optimization algorithm; they also applied a local search to further improve the solutions. Soylu proposed a general variable neighborhood search approach [75] and applied it to the traffic signalization network in the Kayseri province of Turkey. Bolaños et al. considered a multi-objective MTSP [70]; a non-dominated sorting genetic algorithm is proposed by considering the concept of dominance. Sundar et al. presented an exact algorithm for the heterogeneous multiple depots MTSP [68]; an integer linear programming formulation including two classes of valid inequalities was proposed, and a customized branch-and-cut algorithm was developed using this formulation. Necula et al. investigated three multi-objective ACO based algorithms and a single-objective ACO algorithm with the minmax objective for the bi-criteria MTSP [66]. Alves et al. presented a GA to solve the MTSP with the objective of reducing both the overall distance and the difference between the distances travelled by each salesman [67]; two approaches were evaluated, a multi-objective GA and a mono-objective GA. In 2016, Gu et al. studied the problem of cooperative trajectory planning with integrated target assignment for multiple unmanned combat aerial vehicles servicing multiple targets [56]. The problem is formulated as a dynamics-constrained, multiple depots MTSP with neighborhoods. The solving process consists of two phases: first, a directed graph is constructed and the original problem is transformed into a standard asymmetric TSP; then, the Lin-Kernighan heuristic search algorithm is used.

3. PGA for the MTSP

Practice has demonstrated that GA exhibits promising performance in solving the TSP and other combinatorial optimization problems [38,39]. The concept of GA was first proposed by J. D. Bagley in 1967 [43], and J. Holland began a systematic study of the mechanism in 1975. Classic GA cannot fully solve the MTSP: it might generate meaningless, illegal individuals that violate the problem requirements. As a result, it becomes necessary to develop special crossover operators, such as the Partially Mapped Crossover operator (PMX), the Order Crossover operator (OX), and the Cycle Crossover operator (CX) [28–33]. The drawbacks are that these special crossover operators are often too cumbersome to implement, and the gene combination of the parent chromosome might be disrupted during the operation. Therefore, we choose to use PGA instead of GA. PGA is a special GA without a crossover operation. It not only retains GA's basic characteristics, but also has a simpler genetic operation. PGA enjoys higher computational efficiency and does not require diversity in the original population. It avoids "premature convergence" and "population degradation" to some extent. Some researchers have questioned the role of the crossover operator; they believe that it does not play a significant role in the optimization of the population. PGA abolishes the crossover operation and replaces it with a variety of mutation operations. The rest of the processing of PGA is the same as in the GA.

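Stripped to its essentials, the PGA scheme just described — fitness-proportional (roulette) selection with elitism, followed by mutation only, with no crossover — can be sketched as follows. This is an illustrative skeleton of our own, not the paper's exact procedure: the toy fitness and mutation callables stand in for the MTSP-specific fitness function and the four segment mutation operators defined in Sections 3.1–3.3.

```python
import random

def pga_generation(population, fitness, mutate, elite_rate=0.1):
    """One PGA generation: selection plus mutation only -- no crossover."""
    # Elitist selection: the top individuals survive unchanged.
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(elite_rate * len(population)))
    elites = ranked[:n_elite]
    # Roulette selection: fitness-proportional sampling with replacement.
    weights = [fitness(ind) for ind in population]
    parents = random.choices(population, weights=weights,
                             k=len(population) - n_elite)
    # Mutation replaces crossover entirely.
    return elites + [mutate(ind) for ind in parents]

# Toy run: evolve integer lists whose entries sum to 10.
random.seed(0)
fit = lambda ind: 1.0 / (1.0 + abs(sum(ind) - 10))   # always positive
mut = lambda ind: [g + random.choice([-1, 0, 1]) for g in ind]
pop = [[random.randint(0, 5) for _ in range(4)] for _ in range(20)]
for _ in range(50):
    pop = pga_generation(pop, fit, mut)
best = max(pop, key=fit)
```

In the paper's PGA the mutation step draws one of the four segment operators at random and the breakpoint part mutates separately with its own probability; only the overall select-then-mutate structure is shown here.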
3.1. Encoding method and initialization

We use the sequence encoding method, which is the most natural way to express a solution of the MTSP. The sequence encoding method expresses the solution, namely the route, as a sequence of numbers: each city is marked with its own unique serial number in advance, and the numbers are ordered according to the salesmen's visiting order. The sequence encoding method is intuitive and reasonable. It ensures that, except for the starting cities, every city is visited by a salesman only once.

Using the sequence encoding method, we express the actual population by a route population and a breakpoint population. If both populations are represented by arrays, then every row of an array is an individual. Each individual in the route population is the sequence of cities that all salesmen may visit. Each individual in the breakpoint population is a sequence of breakpoint locations, which separates its corresponding city sequence into several parts; each part is the route assigned to a salesman. An actual individual in the proposed algorithm consists of an individual in the route population and the corresponding individual in the breakpoint population. That is, an individual is represented by two parts: the first is the route part and the second is the breakpoint part.

For example, assume that 3 salesmen have to visit 10 cities; their routes might look like this:

– Salesman 1 starts from City 7, passes City 4 and City 6, and finally returns to City 7;
– Salesman 2 starts from City 2, passes City 5, City 1 and City 9, and finally returns to City 2;
– Salesman 3 starts from City 3, passes City 10 and City 8, and finally returns to City 3.

As shown in Fig. 1, we can use two breakpoints to divide the whole city sequence into three parts. The city sequence in the route population is Route = [7 4 6 2 5 1 9 3 10 8], and the breakpoint location sequence in the breakpoint population is Breaks = [3 7].

Fig. 1. Example of a solution representation for a 10 city MTSP with 3 salesmen.

Carter and Ragsdale designed a two-part chromosome technique [69] which is very similar to our encoding method. In their technique, the first part of the chromosome is a permutation of the integers from 1 to n, representing the n cities. The second part is of length m and represents the number of cities assigned to each of the m salespersons. Fig. 2 illustrates an example of their two-part chromosome representation for the same 10 city MTSP with 3 salesmen: Salesman 1 visits City 7, 4, and 6 (in that order), Salesman 2 visits City 2, 5, 1, and 9, and Salesman 3 visits City 3, 10, and 8. In our method, the first part is also a permutation of the integers from 1 to n, representing the n cities. However, the second part is not the same: it represents a sequence of breakpoint locations and separates the city permutation into several parts, each of which is the route assigned to a salesman.

Fig. 2. Example of the two-part representation proposed by Carter and Ragsdale.

We initialize the population randomly. Suppose we have N cities, S salesmen, and the minimum number of cities each salesman must travel equals M. Each individual in the original route population is then a random permutation of the integers from 1 to N, representing the N cities. Each individual in the original breakpoint population is of length S − 1: it is a set of S − 1 distinct numbers randomly chosen from 1 to N. Although the S − 1 numbers are randomly chosen, in order to meet the requirement that each salesman travels at least M cities, every combination of the S − 1 numbers has to satisfy the following constraints:

1. The S − 1 numbers should be arranged in increasing order.
2. The difference between every two adjacent numbers should not be less than M.
3. The first number should not be less than M.
4. The difference between N and the last number should not be less than M.

We name the above process of initializing the breakpoints the Modify Breaks operation. The Modify Breaks operation satisfies the minimum number of cities constraint set for each salesman.

3.2. Fitness function

The population is evaluated by the fitness function. In most cases, a greater fitness value indicates that the individual is better adapted to the environment. Here, the fitness value is the reciprocal of the total distance: a smaller total distance (a greater reciprocal) suggests a better route.

3.3. Selection and mutation

After obtaining the fitness value of each individual in the current population, we select individuals from the population to carry out reproduction of the next generation. Individuals with better fitness values are selected; those individuals are more likely to be retained to breed the next generation. Commonly used selection methods are the fitness ratio method, the ranking selection method, the league selection method, the crowding model method, etc. We use roulette selection and elitist selection [41,42].

The scale of the roulette is the cumulative fitness value of the individuals, and the size of each part of the roulette represents the fitness value of one individual: an individual with a greater fitness value occupies a bigger area of the roulette. The roulette is rotated at a random speed, and when it stops turning, the individual that the pointer points to is selected. Obviously, individuals with greater fitness values are more likely to be selected than individuals with lower fitness values. However, this selection mechanism is too random, and relatively good individuals may be lost in the process. So, we further use elitist selection. Elitist selection selects the best individual and reserves it for the next generation. It ensures that the individual with the highest fitness value will definitely be retained in the next generation. It is an effective way to enhance the stability of the algorithm and to accelerate convergence.

The mutation operator randomly chooses some individuals and directly changes some of their entries. A route mutation probability and a breakpoint mutation probability must be given. When mutating the route population, a random number between 0 and 1 is generated for each individual; if the random number is less than the route mutation probability, the mutation is performed.

Before performing the mutation, a segment from I to J is randomly obtained, meaning that the segment I to J of the route will be mutated. Additionally, an insertion position P is randomly generated based on I and J.

We define four kinds of route mutation operation: FlipInsert, SwapInsert, LSlideInsert and RSlideInsert. Which type of mutation operation is applied is selected randomly.

1. FlipInsert: The serial numbers in segment I to J are reversed, and then the whole I to J segment is inserted at the insertion position P.
2. SwapInsert: The serial numbers at positions I and J are swapped, and then the whole I to J segment is inserted at the insertion position P.
3. LSlideInsert: The serial numbers in segment I to J are cyclically shifted to the left by one position, and then the whole I to J segment is inserted at the insertion position P.
4. RSlideInsert: The serial numbers in segment I to J are cyclically shifted to the right by one position, and then the whole I to J segment is inserted at the insertion position P.

Similarly, when mutating the breakpoint population, a random number between 0 and 1 is generated for each individual. If the random number is less than the breakpoint mutation probability, a new set of breakpoints is produced for the individual using the same method as when initializing the breakpoints.

The four kinds of mutation operating on the segment I to J ensure that the algorithm exhibits good local search abilities. Moreover, the operation of inserting the whole I to J segment at a position P makes sure that the algorithm enjoys good global search ability even in the absence of a crossover operation; the offspring population is able to jump out of local optima. From the way the above mutation operations work, it can be seen that they avoid the problem of infeasibility in both the new route chromosomes and the new breakpoint chromosomes.

4. IPGA for the MTSP

Since the above PGA needs both a route mutation probability and a breakpoint mutation probability, these values have to be set in advance; however, we usually do not know the optimal values. It is necessary to set appropriate parameters in a GA, such as the mutation probability, and the above PGA depends heavily on its parameters. Yet the parameter selection itself is an optimization problem: we can only obtain the most appropriate parameter values by trial and error. Roulette selection selects individuals too randomly. Elitist selection can only prevent outstanding individuals from accidentally being eliminated in the process of replication; still, good solutions are often lost. Moreover, random mutation may easily damage the genetic characteristics of the chromosome, so the number of invalid mutations gradually increases, and this problem becomes more serious as the solution becomes more optimized. Overall, we are of the opinion that the genetic operation of the above PGA is not good enough and its search ability is not strong enough. It causes the algorithm to be trapped in local optima, with relatively poor convergence ability and premature convergence, and it makes the algorithm unable to improve the quality of the result even if we increase the number of iterations. Designing better selection or mutation operators, or properly combining some other good local search algorithm, can effectively overcome the problem. Considering this, we propose an improved PGA, namely IPGA. By improving the selection and mutation operations of the original PGA, IPGA finds the global optimal solution as early as possible while maintaining biological diversity. IPGA improves the overall performance of PGA. The encoding method and initialization scheme in IPGA are the same as in PGA.

4.1. Fitness function

Since IPGA no longer uses roulette selection, the fitness value is directly computed as the total distance that the salesmen have travelled. The smaller the total distance, the better the fitness of the individual.

4.2. Algorithm flowchart

The flowchart of the IPGA is shown in Fig. 3.

Fig. 3. IPGA processing.

4.3. Genetic operation of IPGA

The process of the genetic operation of IPGA proceeds as follows:

1. Randomly select 10 individuals that have not been selected from the contemporary population.
2. Find the individual with the best fitness among the 10 individuals just selected.
3. Create a temporary population consisting of 10 individuals, all of which are copies of the best individual found in procedure 2.
4. Generate 2 random mutation segment selection points I and J, and the mutation segment insertion location P.
5. Mutate each individual in the temporary population created in step 3 in a different way: do nothing, FlipInsert, SwapInsert, LSlideInsert, RSlideInsert or Modify Breaks. The specific process is as follows:
(1) Do nothing to the first individual.
(2) The second individual performs the FlipInsert operation.
(3) The third individual performs the SwapInsert operation.
(4) The fourth individual performs the LSlideInsert operation.
(5) The fifth individual performs the RSlideInsert operation.
(6) The sixth individual performs the Modify Breaks operation (introduced in Section 3.1).
(7) The seventh individual performs the FlipInsert operation and the Modify Breaks operation.
(8) The eighth individual performs the SwapInsert operation and the Modify Breaks operation.
(9) The ninth individual performs the LSlideInsert operation and the Modify Breaks operation.
(10) The tenth individual performs the RSlideInsert operation and the Modify Breaks operation.
6. Join the temporary population that has performed the mutation operations into the new population.
7. If all of the individuals in the contemporary population have been selected, then continue, because a whole new offspring population has been generated; otherwise go back to step 1.

The genetic operation of IPGA is sound. The mutation operations of PGA, namely FlipInsert, SwapInsert, LSlideInsert and RSlideInsert, are also used in IPGA, so IPGA comes with all the advantages of PGA. In addition, step 5 above ensures that the best individual found in step 2 definitely remains in the offspring population. The best individual is mutated in several different ways to produce new individuals: some are produced by changing the route of the best individual, some by changing its breakpoints, and others by changing both. Thus, the mutation operation comprehensively considers the mutation of both the route sequence and the breakpoint sequence, and some potential models are prevented from being eliminated in the initial search process. Furthermore, step 5 ensures that the algorithm will not ignore second-best solutions in the iterative process, because the individual that is selected and mutated to produce offspring is exactly a relatively good individual. For the above-mentioned reasons, the global search ability and the local search ability of IPGA are even better than those of PGA. All in all, it is easier for IPGA to find the optimal solution; the IPGA has higher efficiency and better convergence ability.

5. PSO for the MTSP

J. Kennedy and R.C. Eberhart proposed the particle swarm optimization algorithm in [46]. Their inspiration comes from the study of the social behavior of flocking birds. The theoretical principle of PSO is to consider each bird in the flock as a particle and to give the particle a memory, which helps the particle find the optimal solution through communication with the other particles in the swarm. The standard PSO algorithm considers each individual as a particle that has no mass or volume, extended to N dimensions. The positions of the particles in the N-dimensional space are potential solutions to the problem. The position of the i-th particle in the N-dimensional space is represented by the vector X_i = (X_i1, X_i2, ..., X_in), and the flight speed is expressed as the vector V_i = (V_i1, V_i2, ..., V_in). Particle swarm optimization is an iteration-based optimization tool. During each iteration, particles update their speed and position according to the following formulas:

V_{i+1} = ω × V_i + c1 × r1 × (pbest − X_i) + c2 × r2 × (gbest − X_i)   (2)

X_{i+1} = X_i + V_{i+1}   (3)

In the formulas:
V_i: particle velocity after the i-th iteration;
V_{i+1}: particle velocity after the (i + 1)-th iteration;
X_i: particle position after the i-th iteration;
X_{i+1}: particle position after the (i + 1)-th iteration;
pbest: the best position found so far by this particle;
gbest: the best position found so far by all the particles in the entire swarm;
ω: the inertia weight;
c1, c2: learning factors;
r1, r2: random numbers uniformly distributed between 0 and 1.

5.1. Position and velocity

The position of each particle indicates a potential solution of the search problem. When using PSO to solve the MTSP, the
position of each particle indicates one salesman's possible route. The representation of the particle position is exactly the same as that of the population in PGA: the particle position is composed of the particle route position and the particle breakpoint position.

The particle velocity changes the particle position, and velocity and position are produced by the PSO formulas. If we use the standard PSO formulas, we can never obtain a new position that conforms to the problem, no matter how we represent the velocity, because pbest, gbest and X_i are all city sequence orders: directly adding or subtracting two city sequences is meaningless. Therefore, it is necessary to modify the standard PSO formulas so that the particle can still learn from its own current best position and the current global best position. Moreover, we should represent the particle velocity appropriately so that the velocity can influence and change the particle position. Due to the meaning of the particle breakpoint position, the breakpoint position can be updated by learning from both itself and the current best particle in the whole swarm, without a velocity. As a result, we only set up a velocity for the particle route position, namely the particle route velocity. The new PSO formulas are as follows:

V_Route^{i+1} = ω × V_Route^i ⊙ [c1 × r1 × (pbestRoute ⊖ X_Route^i)] ⊙ [c2 × r2 × (gbestRoute ⊖ X_Route^i)]   (4)

X_Route^{i+1} = X_Route^i ⊕ V_Route^{i+1}   (5)

X_Break^{i+1} = [r3 × (pbestBreak ⊙ X_Break^i)] ⊙ [r4 × (gbestBreak ⊙ X_Break^i)]   (6)

In the formulas:
V_Route^i: particle route velocity after the i-th iteration;
V_Route^{i+1}: particle route velocity after the (i + 1)-th iteration;
X_Route^i: particle route position after the i-th iteration;
X_Route^{i+1}: particle route position after the (i + 1)-th iteration;
pbestRoute: the particle's own best route position found so far;
gbestRoute: the best route position of all particles found so far;
X_Break^i: particle breakpoint position after the i-th iteration;
X_Break^{i+1}: particle breakpoint position after the (i + 1)-th iteration;
pbestBreak: the particle's own best breakpoint position found so far;
gbestBreak: the best breakpoint position of all particles found so far;
ω: the inertia weight;
c1, c2: learning factors;
r1, r2, r3, r4: random numbers uniformly distributed between 0 and 1.

The meaning of the newly defined operators is as follows:

⊕: Exchange sequence operator. S = (i, j) represents an exchange sequence. An exchange sequence S acts on a city sequence by swapping the serial number in position i and the serial number in position j of that city sequence. If a city sequence is A = (1,2,3,4) and an exchange sequence is S = (1,2), then, using ⊕ to represent the action of the exchange sequence S on the city sequence A, after A ⊕ S we obtain the city sequence B = (2,1,3,4). Assuming that A = (1,5,3,2,4) and B = (2,5,1,3,4) are city sequences, if we want to get B after A ⊕ S, we need to swap serial numbers in A several times. In this example, we can obtain B by swapping the first number and the third number in A, and then swapping the first number and the fourth number, so the exchange sequence should be expressed as S = ((1,3),(1,4)). In addition, when the sequences A and B both contain the same n numbers, possibly arranged in different ways, in the worst case we need to swap numbers in A (n − 1) times so that A is changed into B; most cases are simpler, and we do not have to carry out the operation (n − 1) times.

⊖: Obtain the exchange sequence. Assuming A = (1,5,3,2,4) and B = (2,5,1,3,4) are city sequences, A ⊖ B means to get the exchange sequence S that conforms to A ⊕ S = B. In other words, S = A ⊖ B, and here S = ((1,3),(1,4)).

⊙: Similarity calculation. S1 ⊙ S2 obtains the intermediate value between S1 and S2 by comparing them; the intermediate value is the result of learning from both S1 and S2. Specifically, S1 ⊙ S2 means the following calculation: S1 ⊙ S2 = S1 + ⌈(S2 − S1) ÷ 2⌉, where ⌈·⌉ denotes rounding up (for example, ⌈2.3⌉ = 3). For S1 = (1,3) and S2 = (1,5), S = S1 ⊙ S2 = (1,4). Assuming S1 = ((1,3),(1,4),(1,1),(1,1)) and S2 = ((1,5),(1,3),(1,4),(1,2)) are exchange sequences, S1 ⊙ S2 = ((1,4),(1,4),(1,3),(1,2)).

After introducing the new operators, the particle route velocity is clearly an exchange sequence. The new formulas ensure that, when solving the MTSP, each particle updates its route position by learning from both its own current best route position and the current best route position in the whole swarm; the breakpoint position is updated in the same way. With the inertia weight and learning factors set appropriately, the particle has appropriate motion inertia, the ability of self-summary and self-learning, and the ability to learn from the excellent individuals in the swarm.

5.2. Initialization and the fitness function

The position and velocity of the particles are initialized randomly. Because the representation of the particle position is the same as that of the population in PGA, their initialization methods are also the same. Each particle route velocity in the particle swarm is an exchange sequence, and the original exchange sequence is generated randomly: supposing we have N cities, the numbers in the original exchange sequence are random integers between 1 and N. In addition, PSO adopts the fitness function of IPGA.

5.3. Infeasibility in PSO

The mutation operations in PGA and IPGA ensure that no infeasible solutions occur in the chromosomes after mutation. The exchange sequence operator in PSO likewise ensures that there is no chance of infeasibility in the particle route position. However, we have to fix the infeasibility of the particle route velocity and the particle breakpoint position.

When using (4) to update particle route velocities, it is likely that we obtain some infeasible exchange sequences. For example, suppose we have N cities; in formula (4), after calculating c1 × r1 × (pbestRoute ⊖ X_Route^i) and c2 × r2 × (gbestRoute ⊖ X_Route^i), we obtain two intermediate exchange sequences whose values might fall outside the range 1 to N. The same thing happens after calculating c1 × r1 × (pbestRoute ⊖ X_Route^i) ⊙ c2 × r2 × (gbestRoute ⊖ X_Route^i), ω × V_Route^i, and ω × V_Route^i ⊙ [c1 × r1 × (pbestRoute ⊖ X_Route^i) ⊙ c2 × r2 × (gbestRoute ⊖ X_Route^i)]. We fix this kind of infeasibility by directly changing a number in the exchange sequence to 1 if it is smaller than 1, and to N if it is greater than N.

It is also likely that we obtain some infeasible particle breakpoint positions when using (6). Every time we use the ⊙ operator in (6), we obtain an intermediate breakpoint sequence, which might not satisfy the four breakpoint constraints mentioned in Section 3.1. If the new breakpoint sequence does not satisfy those constraints, we fix this kind of infeasibility by replacing the infeasible result with the left operand of the ⊙ operator, which should be a rational breakpoint in (6). The same issue happens after calculating r3 × (pbestBreak ⊙ X_Break^i) and r4 × (gbestBreak ⊙ X_Break^i); for this infeasibility problem, we recalculate until a feasible solution is found.

Table 1
Values of parameters used in algorithms.

Size of population/Number of particles: 100
Mutation rate in PGA: 0.01
ω (PSO): 1.1
c1 (PSO): 2
c2 (PSO): 2
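The exchange-sequence operators of Section 5.1 and the clamping repair above can be made concrete with a short sketch. This is our illustrative implementation, not code from the paper: the function names are ours, positions are 1-based as in the examples, and `derive_seq` may return a different (but equally valid) exchange sequence than the one listed in the text, since several swap orders can transform the same A into the same B.

```python
import math

def apply_seq(route, swaps):
    """Exchange-sequence operator: apply swaps (1-based positions) to a city sequence."""
    r = list(route)
    for i, j in swaps:
        r[i - 1], r[j - 1] = r[j - 1], r[i - 1]
    return r

def derive_seq(a, b):
    """Obtain an exchange sequence S with apply_seq(a, S) == b (one valid choice)."""
    a, swaps = list(a), []
    for i in range(len(a)):
        if a[i] != b[i]:
            j = a.index(b[i], i + 1)      # locate b[i] further right in a
            a[i], a[j] = a[j], a[i]
            swaps.append((i + 1, j + 1))
    return swaps

def similarity(s1, s2):
    """Similarity: element-wise intermediate value s1 + ceil((s2 - s1) / 2)."""
    return [tuple(x + math.ceil((y - x) / 2) for x, y in zip(p, q))
            for p, q in zip(s1, s2)]

def clamp(swaps, n):
    """Repair of Section 5.3: force every entry of an exchange sequence into [1, n]."""
    return [tuple(min(max(v, 1), n) for v in p) for p in swaps]
```

For the paper's example A = (1,5,3,2,4), B = (2,5,1,3,4), `derive_seq` happens to return ((1,4),(3,4)) rather than ((1,3),(1,4)); both satisfy A ⊕ S = B, which is the only property the update formulas rely on.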
5.4. Algorithm − a general flow of processing

(1) Initialize the particle swarm. Namely, assign an original particle route position X_Route^1 randomly to each particle, and assign a random exchange sequence V_Route^1 to X_Route^1. Additionally, for each particle route position X_Route^1, randomly assign its original particle breakpoint position X_Break^1, which must satisfy the minimum-number-of-cities requirement.
(2) Set pbest for each particle in the original particle swarm. Namely, set pbestRoute of each particle to this particle's X_Route^1, and set pbestBreak of each particle to this particle's X_Break^1.
(3) Set gbest for the original particle swarm. Namely, find the particle that has the best fitness in the original particle swarm, then set gbestRoute to this particle's X_Route^1 and gbestBreak to this particle's X_Break^1.
(4) Let i = 1 (i indicates the current generation).
(5) Based on the particles' current V_Route^i, X_Route^i, pbestRoute and gbestRoute, use formula (4) to get V_Route^{i+1}.
(6) Based on the particles' current X_Route^i and V_Route^{i+1}, use formula (5) to get X_Route^{i+1}.
(7) Based on the particles' current X_Break^i, pbestBreak and gbestBreak, use formula (6) to get X_Break^{i+1}.
(8) Update pbest of each particle. Use X_Route^{i+1} and X_Break^{i+1} of each particle to calculate each particle's fitness value. If the new fitness value is better than the fitness value calculated from this particle's pbestRoute and pbestBreak, then update this particle's pbestRoute and pbestBreak to X_Route^{i+1} and X_Break^{i+1}.
(9) Update gbest. Find the particle that has the best fitness value in the particle swarm. If this particle's fitness value is better than the fitness value calculated from gbestRoute and gbestBreak, then update gbestRoute and gbestBreak to this particle's X_Route^{i+1} and X_Break^{i+1}.
(10) Let i = i + 1. If i is bigger than the specified maximum number of iterations, then the algorithm is finished; otherwise turn to step (5).

6. Experimental studies

We put forward a series of comparative experiments on PGA, IPGA and PSO. In each of the comparative experiments, all three algorithms share the same configurations and the same irrelevant variables, including the same original population/particle swarm and the same city locations (irrelevant variables refer to the variables in the algorithm that would not drastically affect the performance). One state-of-the-art method from the literature, the invasive weed optimization algorithm, is implemented and compared with PGA, IPGA and PSO. The experiment platform is a MacBook Pro with a 2.7 GHz Intel Core i5 processor and 8 GB of 1867 MHz DDR3 memory.

6.1. Algorithm validity test

The first set of experiments tests each algorithm's validity. In this experiment, the test data comes from TSPLIB (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) [58]. TSPLIB is a library of sample instances for the TSP and related problems. Unlike the well-known TSP, there is no open benchmark for testing MTSP. To test each algorithm's validity, we transfer our MTSP approaches into TSP approaches: we set the number of salesmen to one, and then use the benchmark instances presented in TSPLIB.

The parameter configuration for the algorithms in this experiment is reported in Table 1. We selected these values after thorough parameter tuning. Conclusions from other existing studies on the parameter-choosing issue are also considered [87–95]. Parameter selection is itself an optimization problem; [55] presents a state-of-the-art survey of parameter control strategies, covering parameters such as population size and mutation rates. To compare the impact of various parameters on algorithm performance, we completed numerous tests. Eventually we discovered that the values in Table 1 are the most appropriate ones to speed up the process, so we chose the values in Table 1 for all of the following experiments.

The test instance and the solutions obtained by each algorithm are shown in Table 2. It is worth mentioning that the results in Table 2 are all obtained from only a single test, meaning that they may not be the optimal solutions.

Table 2
Test instance and obtained solutions.

Instance Cities Optimal in TSPLIB PGA IPGA PSO
st70 70 675 1055.4422 677.1945 2670.2450

From Table 2, we can conclude that our two PGA algorithms and the PSO algorithm can solve the TSP to some extent. The IPGA can produce a high-quality solution: the result of IPGA is very close to the optimal solution in TSPLIB, and it is superior to PGA and PSO. Since the solutions obtained with IPGA are so much better than those of PGA and PSO, to reveal the performance of IPGA we conduct more experiments for IPGA using the instances from TSPLIB. We compare our results with both the TSPLIB benchmark and the results provided by other existing approaches. In total, seven TSP problems are solved. The comparison and the results of the various methods for the tested TSPLIB instances are listed in Table 3. The computational results demonstrate that the IPGA is very competitive with the state-of-the-art methods in the literature and is capable of finding high-quality solutions.

Due to the fact that we research the symmetric minsum MTSP with multiple depots, closed paths, and a minimum city number restriction, as far as we know, there is no specific method proposed for this problem. To compare the performance of our algorithms with other approaches that may appear in the future, we report our test results using the TSPLIB instances in Table 4. The experimental instances are of different problem sizes and numbers of salesmen. We use eil51, eil76, eil101, kroA100, kroA150, and kroA200, because these instances are the most frequently used by other researchers when solving other types of MTSP [48,66,68,81–83,100,106]. In Table 4, M refers to the number of salesmen, and M ranges from 2 to 5. We test our algorithms with these numbers of salesmen because, in other MTSP literature, 2–5 are the most frequently tested numbers of salesmen when using the above instances [48,66,68,81–83,100,106]. All instances' computational results are listed in Table 4.
Table 3
Results of comparative studies.

Instance st70 eil51 eil76 eil101 kroA100 kroA150 kroA200

TSPLIB 675 426 538 629 21282 26524 29368


Our IPGA 677.1945 428.9816 555.7116 661.9916 21483.2780 28685.4181 34520.5099
[17] – 429 538 – 21282 – –
[18] 677.109 – 544.369 – 21285.443 – –
[19] – 433.05 562.93 689.67 – – –
[36] – 426 538 629 21282 – 29369
[16] 675 426 – – – – –
[6] – – 547.3 – – 26789.2 29934.9
[37] – 768.181 1263.521 – 84224.503 – 206281.560
[47] 689.4 – 558.7 659.0 21625.7 – –
[3] – 436.16 554.73 – – – –
[2] – 426 538.3 632.1 – – –
[15] 677 427 547 646 21352 27458 –
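One convenient way to read Table 3 is via the relative gap between the IPGA result and the TSPLIB optimum. The helper below is ours, not part of the paper; the input numbers are copied from Table 3.

```python
def gap_percent(obtained, optimum):
    """Relative excess of an obtained tour length over the TSPLIB optimum, in percent."""
    return 100.0 * (obtained - optimum) / optimum

# IPGA results vs. TSPLIB optima, taken from Table 3
ipga_vs_optimum = {"st70": (677.1945, 675), "eil51": (428.9816, 426),
                   "eil76": (555.7116, 538), "eil101": (661.9916, 629)}
gaps = {name: round(gap_percent(got, best), 2)
        for name, (got, best) in ipga_vs_optimum.items()}
```

On these four instances the gaps come out below 1% for st70 and eil51 and in the low single digits for eil76 and eil101, which is the "very close to the optimal solution" behavior reported in the text.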

Table 4
Test instances and their solutions with multiple salesmen.

Instance NumofCities Algorithm M=2 M=3 M=4 M=5

eil51 51 IWO 651.3250 897.5994 1129.9851 1465.2615


PGA 641.1901 652.6491 665.7826 610.0616
IPGA 421.2434 414.1578 420.4573 429.4046
PSO 1119.7457 1058.7641 1083.1644 1079.2563
eil76 76 IWO 856.8731 1184.7318 1508.8931 1807.4436
PGA 1127.4644 1106.2985 1158.0345 1067.0613
IPGA 562.2230 565.7708 593.9784 546.0394
PSO 1823.7799 1867.5903 1849.7804 1766.2633
eil101 101 IWO 1021.5849 1418.9286 1754.8493 2121.4964
PGA 1743.5693 2007.0738 1829.0473 1870.0026
IPGA 671.5268 735.1369 749.9429 720.0511
PSO 2637.4476 2620.6196 2584.8125 2564.5750
kroA100 100 IWO 32968.9254 45697.6433 58404.7952 72544.3838
PGA 67156.0333 71686.2766 75062.3573 77737.3361
IPGA 21996.9516 25889.1331 23312.4546 24271.6862
PSO 124690.4115 127181.2091 124405.8296 123017.4943
kroA150 150 IWO 41918.5382 57256.0079 72961.4759 89122.4975
PGA 158081.3039 161911.9380 160985.4548 164183.7529
IPGA 35169.8579 44004.4575 36070.8517 46004.4067
PSO 204422.1032 196907.7808 199202.4626 201597.2370
kroA200 200 IWO 45619.4740 66659.3110 82029.6028 103767.1568
PGA 232679.5527 229190.7521 231062.0829 244364.0721
IPGA 47679.0755 53166.9713 59225.4007 58581.9413
PSO 270488.1042 274854.1682 276537.2630 270835.0476

As we only tested once, the results have not been optimized. Additionally, column Instance lists the names of the tested instances, and column NumofCities lists the number of cities in the instance problem. The minimum number of cities that each salesman must go through is set to 1. The following results also reveal that the performance of IPGA is superior in terms of solution quality. The solution values of IPGA are shown in bold.

We further implement the invasive weed optimization algorithm with local search (IWO), which is the best-performing algorithm among the three algorithms proposed by Venkatesh and Singh [79]. We use IWO [79] to solve our MTSP. The solution values of IWO are underlined and shown in italics in Table 4. It is obvious from Table 4 that our PGA, IPGA and PSO all perform much better than IWO.

6.2. Comparative experiments and algorithm performance

In this part, we test our algorithms with 35 cities, 5 salesmen, and a minimum of 3 cities that each salesman must have travelled. We use 5000 iterations because after 5000 iterations the algorithms' convergence becomes flat. Other parameters are the same as shown in Table 1. Fig. 4 displays the convergence ability of the three algorithms and Fig. 5 shows the test solution, expressed as a route map, obtained with the three algorithms. The time consumption of the algorithms under this condition is shown in Table 5; the value before "±" is the average time consumption over 35 tests, and the value after "±" is the standard deviation over the same 35 tests. As a comparison, we implement the IWO [79]. The convergence ability of IWO is shown in Fig. 6, and the solution obtained with IWO is shown in Fig. 7.

Table 5
Computing overhead.

PGA IPGA PSO IWO
7.97 s ± 0.48 21.42 s ± 0.15 329.82 s ± 28.48 31.72 s ± 2.07

Table 6
Computing overhead.

PGA IPGA PSO IWO
4.98 s ± 0.03 21.68 s ± 3.19 154.80 s ± 4.48 25.68 s ± 2.52

We also test our algorithms with 15 cities, 3 salesmen, 5000 iterations, and a minimum of 3 cities that each salesman must have travelled. Other parameters are the same as shown in Table 1. Fig. 8 displays the convergence ability of the three algorithms this time, and Fig. 9 shows the test solution, expressed as a route map, obtained with the three algorithms this time. The time consumption of the algorithms under this condition is shown in Table 6 along with the standard deviations. IWO [79]
is implemented as a comparison. The convergence ability of IWO is shown in Fig. 10, and the solution obtained with IWO is shown in Fig. 11.

Fig. 4. The algorithm convergence ability curve: (a) PGA; (b) IPGA; (c) PSO.

Fig. 5. The test solution expressed as route map obtained with the algorithms: (a) PGA; (b) IPGA; (c) PSO.

We can see from Figs. 4 and 8 that IPGA has already reached a steady state of convergence after 5000 iterations, whereas PGA and PSO apparently have not. Second, all of the figures above show that the solutions of PGA and PSO are similar (PGA's solution is slightly better than PSO's), but they did not

Fig. 6. IWO convergence ability curve.

Fig. 7. The test solution expressed as route map obtained with IWO.

converge to a solution as good as IPGA. Third, as it is showed in


Figs. 4 and 8, the solution distribution of IPGA does not fluctuate
as significantly as PGA and PSO. PGA fluctuates more significantly
than PSO. All above suggest that compared to PGA and PSO, IPGA
can converge quickly and smoothly with a better solution. More-
over, from Figs. 5 and 9, salesmen’s routes in the PGA and PSO are
more likely to cross together and cause collision than IPGA. It means
that the routes obtained from IPGA are more reasonable. Yet, from
Tables 5 and 6, PGA is the least time-consuming method, and PSO
is the most time-consuming one. The computing overhead of IPGA
and IWO are similar.
On the whole, all three algorithms have good convergence
ability. Compared to PGA and PSO, IPGA has a better search per-
formance, stronger global exploration ability, and much better
convergence ability. Additionally, IPGA is also more likely to find
a reasonable solution. The result that IPGA obtains is so satisfy-
ing that we can still say IPGA is the best even though it requires
more operation time than PGA. It is fast to run PGA, and it is easy
Fig. 8. The algorithm convergence ability curve:(a) PGA; (b) IPGA; (c) PSO.
for IPGA to find the optimal solution within a relatively short time
period, but it is a much more time-consuming and difficult task for
PSO.
PGA, IPGA and PSO, all perform much better comparing to IWO.
As shown in Figs. 6 and 10, IWO has a worse convergence ability.
Also, as shown in Figs. 7 and 11, IWO plunges into some local optima
at an early iteration and fails to jump out of it. It stops search-
ing new solution space, and thus, IWO obtains a worse solution in
the end. However, salesmen’s routes in the IWO are less likely to
cross and collide than the salesmen's routes in PGA and PSO.

Fig. 10. IWO convergence ability curve.

Fig. 11. The test solution expressed as route map obtained with IWO.
and PSO.

6.3. Results for different numbers of salesmen and numbers of cities

Finally, we explore how the number of salesmen and the number of cities influence the solution, and we compare the three algorithms' test results. In the following tests, the number of iterations is set to 1000, and the minimum number of cities that each salesman must have travelled is 1. Other parameters are set as in Table 1. The average optimal solution is the average of the solutions over 35 tests, and the standard deviation comes from the same 35 tests. The computational results of the three algorithms are reported in Tables 7–9. When the number of salesmen increases, the value of the average optimal solution mostly tends to decrease; however, in some cases the value increases instead. Those values are underlined and shown in bold italics in Tables 7–9. The relationship between the number of salesmen, the number of cities, and the average optimal solution of each algorithm is displayed in Fig. 12.

Fig. 9. The test solution expressed as route map obtained with the algorithms: (a) PGA; (b) IPGA; (c) PSO.

From Tables 7–9, the average optimal solutions of IPGA are much better than those of PGA and PSO. There are some cases where the standard deviation of IPGA is higher than that of PGA and PSO, but there are no significant differences. The objective problem is a minsum problem, so obtaining a minimum value of the solution is more important. Even though some standard deviations of IPGA are higher
Table 7
Average optimal solutions and standard deviations of PGA.

Cells: average optimal solution ± standard deviation. Columns: number of cities; rows: number of salesmen.

Salesmen\Cities 5 10 15 20 25 30 35 50 80 100

2 12.3 ± 0.00 24.22 ± 0.57 33.38 ± 3.05 50.48 ± 5.00 73.85 ± 6.99 94.08 ± 7.28 109.22 ± 6.34 182.55 ± 11.82 321.66 ± 12.23 433.14 ± 19.45
3 3.78 ± 0.00 19.50 ± 1.28 29.47 ± 2.07 48.29 ± 5.83 75.46 ± 5.83 95.79 ± 9.26 110.93 ± 8.41 176.64 ± 11.94 319.92 ± 14.14 429.95 ± 16.58
4 1.17 ± 0.00 15.49 ± 1.90 30.62 ± 3.17 47.98 ± 4.72 76.11 ± 7.96 94.80 ± 6.38 110.50 ± 9.70 180.50 ± 10.63 321.85 ± 15.38 431.25 ± 13.15
5 – 12.59 ± 1.52 29.03 ± 3.80 45.76 ± 4.58 71.71 ± 6.74 95.49 ± 7.87 110.00 ± 10.32 180.16 ± 11.87 319.61 ± 11.82 427.38 ± 15.59
6 – 8.15 ± 1.29 23.43 ± 3.38 44.54 ± 5.59 73.58 ± 7.60 91.67 ± 8.27 111.75 ± 9.93 183.45 ± 15.41 320.21 ± 12.62 427.53 ± 18.27
7 – 4.46 ± 0.00 21.44 ± 3.23 41.79 ± 5.45 67.90 ± 5.69 88.30 ± 7.80 107.55 ± 11.05 176.49 ± 12.64 313.17 ± 11.91 426.43 ± 13.04
8 – 1.85 ± 0.00 17.19 ± 2.23 36.64 ± 5.14 62.00 ± 6.87 87.64 ± 8.38 106.65 ± 9.98 175.51 ± 12.21 308.80 ± 10.94 425.36 ± 14.56
9 – 0.68 ± 0.00 16.39 ± 3.29 32.34 ± 4.00 63.03 ± 7.78 84.07 ± 7.18 104.22 ± 9.8 171.02 ± 11.00 309.53 ± 16.50 424.74 ± 18.50

10 – – 11.89 ± 2.19 30.13 ± 4.29 57.73 ± 6.35 78.97 ± 8.43 100.71 ± 8.88 172.11 ± 10.64 309.96 ± 14.40 421.88 ± 13.37

Table 8
Average optimal solutions and standard deviations of IPGA.

Cells: average optimal solution ± standard deviation. Columns: number of cities; rows: number of salesmen.

Salesmen\Cities 5 10 15 20 25 30 35 50 80 100

2 12.31 ± 0.00 23.18 ± 0.00 26.81 ± 0.17 30.75 ± 0.32 39.11 ± 1.08 45.20 ± 1.71 51.32 ± 3.16 70.72 ± 3.52 118.41 ± 6.69 152.30 ± 7.79
3 3.78 ± 0.00 18.62 ± 0.00 25.07 ± 0.32 29.96 ± 0.97 38.37 ± 1.84 46.00 ± 2.53 51.53 ± 2.41 75.65 ± 4.69 126.74 ± 6.78 164.58 ± 9.43
4 1.17 ± 0.00 13.85 ± 0.00 24.10 ± 0.32 29.80 ± 1.52 38.30 ± 2.85 47.11 ± 4.34 51.57 ± 4.84 79.71 ± 5.04 137.22 ± 11.63 176.50 ± 11.71
5 – 10.97 ± 0.00 21.20 ± 0.73 28.75 ± 1.58 37.61 ± 3.33 47.76 ± 2.62 53.52 ± 4.15 81.78 ± 7.03 143.21 ± 11.04 186.41 ± 13.07
6 – 7.55 ± 0.00 18.06 ± 0.40 27.12 ± 1.08 38.36 ± 2.81 47.68 ± 3.73 54.96 ± 4.09 84.78 ± 6.23 146.66 ± 10.50 195.81 ± 15.03
7 – 4.46 ± 0.00 15.20 ± 0.44 25.23 ± 1.44 37.46 ± 3.37 43.78 ± 3.49 54.47 ± 4.65 85.39 ± 7.69 153.23 ± 13.42 204.40 ± 17.13
8 – 1.85 ± 0.00 12.13 ± 0.54 22.71 ± 1.57 34.82 ± 2.85 43.86 ± 3.85 51.76 ± 3.64 85.48 ± 6.48 156.83 ± 12.04 206.58 ± 13.86
9 – 0.68 ± 0.00 9.52 ± 0.26 20.32 ± 1.57 34.21 ± 2.59 42.93 ± 3.37 52.12 ± 3.42 84.52 ± 7.29 157.52 ± 8.68 213.45 ± 12.79
10 – – 7.08 ± 0.20 17.92 ± 1.43 30.66 ± 2.69 41.23 ± 3.48 50.98 ± 4.19 87.27 ± 7.35 163.67 ± 10.46 218.52 ± 13.27
Table 9
Average optimal solutions and standard deviations of PSO.

Cells: average optimal solution ± standard deviation. Columns: number of cities; rows: number of salesmen.

Salesmen\Cities 5 10 15 20 25 30 35 50 80 100

2 12.31 ± 0.00 23.53 ± 0.35 36.70 ± 1.48 56.44 ± 2.88 80.80 ± 3.75 100.30 ± 5.32 119.02 ± 3.99 184.59 ± 5.32 312.06 ± 7.70 405.67 ± 9.64
3 3.78 ± 0.00 19.38 ± 1.28 34.07 ± 1.71 52.03 ± 2.82 78.13 ± 3.07 96.57 ± 4.20 115.84 ± 4.78 181.22 ± 6.34 308.38 ± 6.04 402.41 ± 8.83
4 1.17 ± 0.00 15.28 ± 1.10 32.00 ± 1.67 50.69 ± 2.35 74.90 ± 2.66 95.97 ± 3.72 113.26 ± 4.19 179.97 ± 5.07 304.31 ± 7.79 401.26 ± 6.77
5 – 11.01 ± 0.10 27.99 ± 1.55 46.54 ± 2.29 71.66 ± 3.02 91.73 ± 2.49 110.37 ± 3.45 174.99 ± 4.83 299.53 ± 8.33 397.84 ± 8.52
6 – 7.62 ± 0.22 23.97 ± 1.40 43.38 ± 2.09 68.04 ± 3.26 88.20 ± 4.34 105.57 ± 3.68 172.00 ± 4.08 296.78 ± 7.60 391.33 ± 8.06
7 – 4.46 ± 0.00 20.91 ± 1.89 39.24 ± 2.53 62.72 ± 2.65 82.43 ± 3.83 102.15 ± 5.80 165.94 ± 6.49 292.07 ± 8.90 387.62 ± 5.53
8 – 3.33 ± 0.81 16.42 ± 1.38 36.71 ± 2.89 58.29 ± 2.54 79.22 ± 4.14 97.44 ± 4.89 163.04 ± 6.25 289.81 ± 6.53 380.85 ± 8.38
9 – 0.68 ± 0.00 13.00 ± 0.89 32.84 ± 2.67 54.45 ± 3.73 76.04 ± 3.08 93.31 ± 3.99 157.76 ± 7.05 286.61 ± 9.03 379.51 ± 9.74

10 – – 9.54 ± 0.85 29.07 ± 2.59 51.41 ± 2.59 72.00 ± 4.12 90.65 ± 3.84 159.07 ± 6.37 281.51 ± 6.62 377.41 ± 7.74
than that of PGA and PSO, which suggests large fluctuations of the IPGA algorithm; its worst solution is still relatively good compared with the solutions of PGA and PSO. As a result, from Tables 7–9 we can still draw the conclusion that the performance of IPGA is the best, while the performance of PGA and the performance of PSO are similar.
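The argument above — a larger standard deviation does not disqualify IPGA, because even its pessimistic value remains competitive — can be sketched as a simple comparison. All numbers below are hypothetical illustrations, not values from Tables 7–9:

```python
# Rank algorithms by a pessimistic score (mean + one standard deviation):
# an algorithm with larger fluctuations can still win if its mean is much better.
results = {
    "PGA":  (48.1, 1.2),   # (mean, std) - hypothetical
    "IPGA": (43.5, 2.6),   # larger std, but clearly better mean
    "PSO":  (51.4, 2.3),
}

def pessimistic(mean, std):
    # Rough "worst plausible" value of the average optimal solution.
    return mean + std

best = min(results, key=lambda name: pessimistic(*results[name]))
print(best)  # -> IPGA
```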

Fig. 12. The relationship between the number of salesmen and the average optimal solution of each algorithm: (a) PGA; (b) IPGA; (c) PSO.


In Fig. 12, the advantage of IPGA becomes more obvious as the number of cities increases. Specifically, when the number of cities is small, the average optimal solutions of all three algorithms decrease as the number of salesmen increases. When the number of cities grows larger, the average optimal solution of IPGA starts to show a tendency to increase with the number of salesmen, while the average optimal solutions of PGA and PSO no longer decrease but fluctuate, with PGA fluctuating more strongly. We can therefore conclude that, compared with PGA and PSO, IPGA not only has the performance advantages shown in Experiment 2 but also makes it easier to decide on an appropriate number of salesmen. Such a decision becomes even easier when the number of cities is large.

This phenomenon, found in the experimental data, has practical significance. From the economic point of view, the number of cities is fixed for a given problem, so the actual question is how to determine, within a certain range, the number of salesmen that brings the maximum economic benefit. For PGA and PSO, a relatively good number of salesmen is the number reached just before fluctuation appears. For IPGA, if for the given number of cities the average optimal solution tends to increase with the number of salesmen, then a relatively good choice is the smallest number in that range; otherwise, other factors need to be combined to determine the optimal number of salesmen.

6.4. Summary of experiments

In Experiment 1, we compare our results with the known optimal solutions in TSPLIB, which verifies the ability of the three algorithms to solve TSP. Among PGA, IPGA and PSO, IPGA's optimal solution is the closest to the optimal solution in TSPLIB, so IPGA solves TSP most effectively. To further reveal the performance of IPGA, more experiments on IPGA are conducted, and the solutions are compared with the results provided by other existing approaches in the literature. The computational results demonstrate that IPGA is very competitive with the state-of-the-art methods for TSP. We also report our test results on the TSPLIB instances in Table 4 and compare PGA, IPGA and PSO with an existing method, IWO.

Experiment 2 compares and analyzes the algorithms mainly from the angle of algorithm performance: the convergence ability, the quality of the solution, and the time consumption of PGA, IPGA and PSO. A comparison with IWO is also made. The results show that IPGA is the best. PGA and PSO are not as good as IPGA because they have two more serious shortcomings. First, a "super individual" is more likely to occur at the beginning of the search, causing premature convergence; second, differences between individuals become too small in the later stage of the search, causing the algorithm to become trapped in a local optimum and stagnate. This explains the fluctuation observed in the performance of PGA and PSO: some good individuals are lost in the process of evolution, which leads to occasional degradation and sometimes prevents the algorithm from escaping the local optimal solution.

In Experiment 3, we study how the number of salesmen and the number of cities affect the solution. It can be found that a larger number of salesmen does not necessarily generate a better solution. So, for an MTSP with a given number of cities, an appropriate number of salesmen can be determined in combination with other factors. Experiment 3 also shows that IPGA is better than PGA and PSO.

7. Conclusions

MTSP exhibits a wide range of applicability, for example in mobile robot agent path planning optimization [61]. The MTSP studied in this paper is a typical NP-complete problem that has nevertheless received limited research effort. We first introduce a PGA to solve the MTSP, then improve the PGA and propose IPGA, and finally present a swarm intelligence approach based on the PSO algorithm. IWO [79] is implemented as a comparison. Algorithm performance is analyzed and compared through a series of experimental studies. The results demonstrate that IPGA has the best performance and suggest that GA can be superior for solving discrete problems. We also explore how the number of salesmen and the number of cities affect the solution obtained by each algorithm.

The proposed algorithms are feasible and effective in solving complex optimization problems. However, there is still room for improvement in algorithm efficiency, and some code adjustments could be further employed to optimize performance. Future research will focus on other evolutionary computation techniques such as the ant colony algorithm (ACO), from which improvements can be anticipated. We may also consider other categories of relevant stochastic problems, such as the multi-objective knapsack problem commonly encountered in combinatorial optimization.

Acknowledgement

Support from the National Natural Science Foundation of China (NSFC) 61773352, the projects of Communication University of China, and the Fund of CSCSE is gratefully appreciated.

References

[1] M. Held, A.J. Hoffman, E.L. Johnson, P. Wolfe, Aspects of the traveling salesman problem, IBM J. Res. Dev. 28 (1984) 476–486.
[2] X. Xiong, A. Ning, Cellular competitive decision algorithm for traveling salesman problem, in: Enterprise Systems Conference (ES), 2014, pp. 221–225.
[3] L. Hengyu, C. Jiqing, H. Quanzhen, X. Shaorong, L. Jun, An improvement of fruit fly optimization algorithm for solving traveling salesman problems, in: Proc International Conference on Information and Automation (ICIA), 2014, pp. 620–623.
[4] S. Singh, E.A. Lodhi, Comparison study of multiple traveling salesmen problem using genetic algorithm, Int. J. Comput. Sci. Netw. Secur. 14 (2014) 107.
[5] M. Melanie, An Introduction to Genetic Algorithms, A Bradford Book, The MIT Press, 1998.
[6] N. Lai, J. Zheng, Hybrid genetic algorithm for TSP, in: Proc Seventh International Conference on Computational Intelligence and Security (CIS), 2011.
[7] S. Forrest, B. Javornik, R.E. Smith, A.S. Perelson, Using genetic algorithms to explore pattern recognition in the immune system, Evol. Comput. 1 (1993).
[8] V. Singh, Varsha, A.K. Misra, Detection of unhealthy region of plant leaves using image processing and genetic algorithm, in: Proc International Conference on Advances in Computer Engineering and Applications (ICACEA), 19–20 March 2015.
[9] G.A. Smara, F. Khalefah, Localization of license plate number using dynamic image processing techniques and genetic algorithms, IEEE Trans. Evol. Comput. 18 (2014) 244–257.
[10] A. Fernandez, S. Garcia, J. Luengo, E. Bernado-Mansilla, F. Herrera, Genetics-based machine learning for rule induction: state of the art, taxonomy, and comparative study, IEEE Trans. Evol. Comput. 14 (2010) 913–941.
[11] H. Razmi, M. Teshnehlab, H.A. Shayanfar, Neural network based on a genetic algorithm for power system loading margin estimation, IET Gener. Transm. Distrib. 6 (2012) 1153–1163.
[12] Z. Zhang, Neural networks adaptive control of aircraft engine based on genetic algorithm, in: Proc. 26th Chinese Control and Decision Conference, 31 May–2 June 2014.
[13] M. Mitchell, S. Forrest, Genetic algorithms and artificial life, Artif. Life 1 (1994) 267–289.
[14] C. Bierwirth, D.C. Mattfeld, Production scheduling and rescheduling with genetic algorithms, Evol. Comput. 7 (1999) 1–17.
[15] S. Laha, A quantum-inspired cuckoo search algorithm for the travelling salesman problem, in: Proc International Conference on Computing, Communication and Security (ICCCS), 2015, pp. 1–6.

[16] Y. Liu, J. Huang, A novel genetic algorithm and its application in TSP, in: Proc IFIP International Conference on Network and Parallel Computing (NPC), October 2008, pp. 263–266.
[17] C. He, B. Wei, W. Jin, A new population-based incremental learning method for the traveling salesman problem, in: Proc. Congress on Evolutionary Computation (CEC 99), vol. 2, 1999.
[18] X.S. Yan, H.M. Liu, J. Yan, Q.H. Wu, A fast evolutionary algorithm for traveling salesman problem, in: Proc. Third International Conference on Natural Computation (ICNC), vol. 4, August 2007, pp. 85–90.
[19] Y. Wei, Y. Hu, K. Gu, Parallel search strategies for TSPs using a greedy genetic algorithm, in: Proc. Third International Conference on Natural Computation (ICNC), vol. 3, August 2007, pp. 786–790.
[20] W. Elloumia, H.E. Abeda, A. Abrahama, A.M. Alimia, A comparative study of the improvement of performance using a PSO modified by ACO applied to TSP, Appl. Soft Comput. 25 (2014) 234–241.
[21] Y. Marinakisa, M. Marinakib, A hybrid multi-swarm particle swarm optimization algorithm for the probabilistic traveling salesman problem, Comput. Oper. Res. 37 (2010) 432–442.
[22] M. Fa, Network traffic prediction based on particle swarm optimization, in: Proc International Conference on Intelligent Transportation, Big Data and Smart City (ICITBS), 2015.
[23] S. Nair, E. Coronado, M. Frye, T. Goldaracena, C. Arguello, Particle swarm optimization for the control of a swarm of biological robots, in: Proc Annual IEEE India Conference (INDICON), 2015.
[24] M. Rashid, K. Kamal, T. Zafar, Z. Sheikh, A. Shah, S. Mathavan, Energy prediction of a combined cycle power plant using a particle swarm optimization trained feed forward neural network, in: Proc International Conference on Mechanical Engineering, Automation and Control Systems (MEACS), 2015.
[25] M. Yang, H. Zhang, Fuzzy control system of extracting for Chinese traditional medicine based on particle swarm optimization, in: Proc Second WRI Global Congress on Intelligent Systems (GCIS), 16–17 December 2010.
[26] M.R. Umak, K.S. Raghuwanshi, R. Mishra, Review on speedup and accurate intrusion detection system by using MSPSO and data mining technology, in: Proc IEEE Students' Conference on Electrical, Electronics and Computer Science (SCEECS), 1–2 March 2014.
[27] V.K. Minimol, R.S. Shaji, Optimization of scheduling and decision making in elderly homecare system using particle swarm optimization, in: Proc. International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 10–11 July 2014.
[28] D.E. Goldberg, R. Lingle, Alleles, loci, and the traveling salesman problem, in: Proc. First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum, Hillsdale, New Jersey, July 1985, pp. 154–159.
[29] L. Davis, Applying adaptive algorithms to epistatic domains, in: Proc. International Joint Conference on Artificial Intelligence, IEEE Computer Society Press, Los Angeles, 1985, pp. 162–164.
[30] G. Syswerda, Schedule optimization using genetic algorithms, in: Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991, pp. 332–349.
[31] I.M. Oliver, D.J. Smith, J.R.C. Holland, A study of permutation crossover operators on the TSP, in: Proceedings of the Second International Conference, Lawrence Erlbaum, New Jersey, 1987, pp. 224–230.
[32] D. Whitley, T. Starkweather, D. Fuquay, Scheduling problems and traveling salesman: the genetic edge recombination operator, in: Proc. Third International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, Los Altos, 1989, pp. 133–140.
[33] F. Su, F. Zhu, Z. Yin, H. Yao, Q. Wang, W. Dong, New crossover operator of genetic algorithms for the TSP, in: Proc International Joint Conference on Computational Sciences and Optimization, 24–26 April 2009.
[34] M. Li, T. Tong, An improved partheno-genetic algorithm for travelling salesman problem, in: Proc. 4th World Congress on Intelligent Control and Automation, 10–14 June 2002.
[35] S. Li, Z. Wu, X. Pang, Hybrid partheno-genetic algorithm and its application in flow-shop problem, J. Syst. Eng. Electron. 15 (2004) 19–24.
[36] L. Li, Y. Zhang, An improved genetic algorithm for the traveling salesman problem, in: Proc. International Conference on Intelligent Computing, Springer Berlin Heidelberg, August 2007, pp. 208–216.
[37] M.A.H. Akhand, S. Akter, S.S. Rahman, M.H. Rahman, Particle swarm optimization with partial search to solve traveling salesman problem, in: Proc International Conference on Computer and Communication Engineering (ICCCE), July 2012, pp. 118–121.
[38] L. Qu, R. Sun, A synergetic approach to genetic algorithms for solving traveling salesman problem, Inf. Sci. 117 (1999) 267–283.
[39] J.Y. Potvin, Genetic algorithms for the traveling salesman problem, Ann. Oper. Res. 63 (1996) 339–370.
[40] S. Khan, Modeling TSP with particle swarm optimization and genetic algorithm, in: Proc. 6th International Conference on Advanced Information Management and Service (IMS), 30 November–2 December 2010.
[41] S.M. Kilambi, B. Nowrouzian, A genetic algorithm employing correlative roulette selection for optimization of FRM digital filters over CSD multiplier coefficient space, in: Proc IEEE Asia Pacific Conference on Circuits and Systems, 4–7 December 2006.
[42] J. Balicki, Task assignments in logistics by adaptive multi-criterion evolutionary algorithm with elitist selection, in: Proc Federated Conference on Computer Science and Information Systems, 7–10 September 2014.
[43] J.D. Bailey, The Behavior of Adaptive Systems Which Employ Genetic and Correlation Algorithms, PhD Thesis, University of Michigan, 1967.
[44] R. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, in: Evolutionary Programming VII, Springer Berlin Heidelberg, 1998, pp. 611–616.
[45] K. Boudjelaba, F. Ros, D. Chikouche, Potential of particle swarm optimization and genetic algorithms for FIR filter design, Circuits Syst. Signal Process. 33 (2014) 3195–3222.
[46] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
[47] E. Osaba, R. Carballedo, F. Diaz, E. Onieva, P. Lopez, A. Perallos, On the influence of using initialization functions on genetic algorithms solving combinatorial optimization problems: a first study on the TSP, in: Proc IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), 2014, pp. 1–6.
[48] N. Ernest, K. Cohen, Fuzzy logic clustering of multiple traveling salesman problem for self-crossover based genetic algorithm, in: 50th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, January 2012, p. 487.
[49] S. Sadiq, The traveling salesman problem: optimizing delivery routes using genetic algorithms, in: SAS Global Forum, Chicago, IL, USA, 2012, pp. 1–6.
[50] U. Klanšek, Using the TSP solution for optimal route scheduling in construction management, Organization, Technology & Management in Construction: An International Journal 3 (2011) 243–249.
[51] M.A. Muniandy, L.K. Mee, L.K. Ooi, Efficient route planning for travelling salesman problem, in: Proc IEEE Conference on Open Systems (ICOS), October 2014, pp. 24–29.
[52] T.P. Bagchi, J.N. Gupta, C. Sriskandarajah, A review of TSP based approaches for flowshop scheduling, Eur. J. Oper. Res. 169 (2006) 816–854.
[53] S.G. Ponnambalam, H. Jagannathan, M. Kataria, A. Gadicherla, A TSP-GA multi-objective algorithm for flow-shop scheduling, Int. J. Adv. Manuf. Technol. 23 (2004) 909–915.
[54] K. Bharath-Kumar, J. Jaffe, Routing to multiple destinations in computer networks, IEEE Trans. Commun. 31 (1983) 343–351.
[55] G. Karafotias, M. Hoogendoorn, Á.E. Eiben, Parameter control in evolutionary algorithms: trends and challenges, IEEE Trans. Evol. Comput. 19 (2015) 167–187.
[56] X. Gu, X. Cao, Y. Xie, J. Chen, X. Sun, Cooperative trajectory planning for multi-UCAV using multiple traveling salesman problem, in: 35th Chinese Control Conference (CCC), 2016, pp. 2722–2727.
[57] A.S. Rostami, F. Mohanna, H. Keshavarz, A.A.R. Hosseinabadi, Solving multiple traveling salesman problem using the gravitational emulation local search algorithm, Appl. Math. 9 (2015) 699–709.
[58] G. Reinelt, TSPLIB—a traveling salesman problem library, ORSA J. Comput. 3 (1991) 376–384.
[59] F. Li, D. Miao, W. Pedrycz, Granular multi-label feature selection based on mutual information, Pattern Recogn. 67 (2017) 410–423.
[60] D. Miao, F. Xu, Y. Yao, L. Wei, Set-theoretic formulation of granular computing, Chin. J. Comput. 35 (2012) 351–363.
[61] S.T. Brassai, B. Iantovics, C. Enachescu, Optimization of robotic mobile agent navigation, Studies in Informatics and Control (ISSN 1220-1766), 2012.
[62] T. Bektas, The multiple traveling salesman problem: an overview of formulations and solution procedures, Omega 34 (2006) 209–219.
[63] G. Laporte, Y. Nobert, A cutting planes algorithm for the m-salesmen problem, J. Oper. Res. Soc. 31 (1980) 1017–1023.
[64] B. Gavish, K. Srikanth, An optimal solution method for large-scale multiple traveling salesmen problems, Oper. Res. 34 (1986) 698–717.
[65] I. Kara, T. Bektas, Integer linear programming formulations of multiple salesman problems and its variations, Eur. J. Oper. Res. 174 (2006) 1449–1458.
[66] R. Necula, M. Breaban, M. Raschip, Tackling the bi-criteria facet of multiple traveling salesman problem with ant colony systems, in: IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), November 2015, pp. 873–880.
[67] R.M. Alves, C.R. Lopes, Using genetic algorithms to minimize the distance and balance the routes for the multiple traveling salesman problem, in: Proc IEEE Congress on Evolutionary Computation (CEC), May 2015, pp. 3171–3178.
[68] K. Sundar, S. Rathinam, An exact algorithm for a heterogeneous, multiple depot, multiple traveling salesman problem, in: Proc International Conference on Unmanned Aircraft Systems (ICUAS), June 2015, pp. 366–371.
[69] A.E. Carter, C.T. Ragsdale, A new approach to solving the multiple traveling salesperson problem using genetic algorithms, Eur. J. Oper. Res. 175 (2006) 246–257.
[70] R. Bolaños, M. Echeverry, J. Escobar, A multiobjective non-dominated sorting genetic algorithm (NSGA-II) for the multiple traveling salesman problem, Decision Sci. Lett. 4 (2015) 559–568.
[71] N. Chandran, T.T. Narendran, K. Ganesh, A clustering approach to solve the multiple travelling salesmen problem, Int. J. Ind. Syst. Eng. 1 (2006) 372–387.
[72] S. Yuan, B. Skinner, S. Huang, D. Liu, A new crossover approach for solving the multiple travelling salesmen problem using genetic algorithms, Eur. J. Oper. Res. 228 (2013) 72–82.
[73] E.C. Brown, C.T. Ragsdale, A.E. Carter, A grouping genetic algorithm for the multiple traveling salesperson problem, Int. J. Inf. Technol. Decis. Making 6 (2007) 333–347.
[74] A. Singh, A.S. Baghel, A new grouping genetic algorithm approach to the multiple traveling salesperson problem, Soft Computing—A Fusion of Foundations, Methodologies and Applications 13 (2009) 95–101.
[75] B. Soylu, A general variable neighborhood search heuristic for multiple traveling salesmen problem, Comput. Ind. Eng. 90 (2015) 390–401.
[76] A. Király, J. Abonyi, A novel approach to solve multiple traveling salesmen problem by genetic algorithm, in: Proc Computational Intelligence in Engineering, Springer Berlin Heidelberg, 2010, pp. 141–151.
[77] M. Sedighpour, M. Yousefikhoshbakht, N. MahmoodiDarani, An effective genetic algorithm for solving the multiple traveling salesman problem, J. Optim. Ind. Eng. (2012) 73–79.
[78] S. Ghafurian, N. Javadian, An ant colony algorithm for solving fixed destination multi-depot multiple traveling salesmen problems, Appl. Soft Comput. 11 (2011) 1256–1262.
[79] P. Venkatesh, A. Singh, Two metaheuristic approaches for the multiple traveling salesperson problem, Appl. Soft Comput. 26 (2015) 74–89.
[80] M. Yousefikhoshbakht, M. Sedighpour, A combination of sweep algorithm and elite ant colony optimization for solving the multiple traveling salesman problem, Proc. Romanian Academy A 13 (4) (2012) 295–302.
[81] Q. Yu, D. Wang, D. Lin, Y. Li, C. Wu, A novel two-level hybrid algorithm for multiple traveling salesman problems, in: Advances in Swarm Intelligence, 2012, pp. 497–503.
[82] X. Wang, D. Liu, M. Hou, A novel method for multiple depot and open paths, multiple traveling salesmen problem, in: Proc 11th International Symposium on Applied Machine Intelligence and Informatics (SAMI), January 2013, pp. 187–192.
[83] M. Yousefikhoshbakht, F. Didehvar, F. Rahmati, Modification of the ant colony optimization for solving the multiple traveling salesman problem, Romanian J. Inf. Sci. Technol. 16 (2013) 65–80.
[84] A.A. Hosseinabadi, M. Kardgar, M. Shojafar, S. Shamshirband, A. Abraham, GELS-GA: hybrid metaheuristic algorithm for solving multiple travelling salesman problem, in: Proc 14th International Conference on Intelligent Systems Design and Applications (ISDA), November 2014, pp. 76–81.
[85] O. Cheikhrouhou, A. Koubâa, H. Bennaceur, Move and improve: a distributed multi-robot coordination approach for multiple depots multiple travelling salesmen problem, in: Proc International Conference on Autonomous Robot Systems and Competitions (ICARSC), May 2014, pp. 28–35.
[86] H. Larki, M. Yousefikhoshbakht, Solving the multiple traveling salesman problem by a novel meta-heuristic algorithm, J. Optim. Ind. Eng. 7 (2014) 55–63.
[87] T. Amudha, B.L. Shivakumar, Parameter optimization in genetic algorithm and its impact on scheduling solutions, in: Computational Intelligence in Data Mining, vol. 1, Springer, India, 2015, pp. 469–477.
[88] R.L. Haupt, Optimum population size and mutation rate for a simple real genetic algorithm that optimizes array factors, in: Proc. IEEE Antennas and Propagation Society International Symposium, vol. 2, 2000, pp. 1034–1037.
[89] J.J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Trans. Syst. Man Cybern. 16 (1986) 122–128.
[90] T. Bäck, H.P. Schwefel, An overview of evolutionary algorithms for parameter optimization, Evol. Comput. 1 (1993) 1–23.
[91] S.G.B. Rylander, Optimal population size and the genetic algorithm, Population 100 (2002) 900.
[92] J.C. Bansal, P.K. Singh, M. Saraswat, A. Verma, S.S. Jadon, A. Abraham, Inertia weight strategies in particle swarm optimization, in: Proc Third World Congress on Nature and Biologically Inspired Computing (NaBIC), October 2011, pp. 633–640.
[93] T. Beielstein, K.E. Parsopoulos, M.N. Vrahatis, Tuning PSO parameters through sensitivity analysis, Universität Dortmund, 2002.
[94] M.E.H. Pedersen, Good parameters for particle swarm optimization, Hvass Lab., Copenhagen, Denmark, Tech. Rep. HL1001, 2010.
[95] V.A. Rane, Particle swarm optimization (PSO) algorithm: parameters effect and analysis, Int. J. Innov. Res. Dev. 2 (2013) (ISSN 2278-0211).
[96] E. Wacholder, J. Han, R.C. Mann, A neural network algorithm for the multiple traveling salesmen problem, Biol. Cybern. 61 (1989) 11–19.
[97] J. Potvin, G. Lapalme, J. Rousseau, A generalized k-opt exchange procedure for the MTSP, INFOR: Inf. Syst. Oper. Res. 27 (1989) 474–481.
[98] C.Y. Hsu, M.H. Tsai, W.M. Chen, A study of feature-mapped approach to the multiple travelling salesmen problem, in: IEEE International Symposium on Circuits and Systems, 1991, pp. 1589–1592.
[99] P.M. França, M. Gendreau, G. Laporte, F.M. Müller, The m-traveling salesman problem with minmax objective, Transp. Sci. 29 (1995) 267–275.
[100] S. Somhom, A. Modares, T. Enkawa, Competition-based neural network for the multiple travelling salesmen problem with minmax objective, Comput. Oper. Res. 26 (1999) 395–407.
[101] C.H. Song, K. Lee, W.D. Lee, Extended simulated annealing for augmented TSP and multi-salesmen TSP, in: Proc. of the International Joint Conference on Neural Networks, vol. 3, July 2003, pp. 2340–2343.
[102] F. Zhao, J. Dong, S. Li, X. Yang, An improved genetic algorithm for the multiple traveling salesman problem, in: Chinese Control and Decision Conference (CCDC), July 2008, pp. 1935–1939.
[103] W. Liu, S. Li, F. Zhao, A. Zheng, An ant colony optimization algorithm for the multiple traveling salesmen problem, in: 4th IEEE Conference on Industrial Electronics and Applications, May 2009, pp. 1533–1537.
[104] P. Oberlin, S. Rathinam, S. Darbha, A transformation for a heterogeneous, multiple depot, multiple traveling salesman problem, in: American Control Conference, 2009, pp. 1292–1297.
[105] W. Zhou, Y. Li, An improved genetic algorithm for multiple traveling salesman problem, in: 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR), 6–7 March 2010.
[106] S.H. Chen, M.C. Chen, Operators of the two-part encoding genetic algorithm in solving the multiple traveling salesmen problem, in: Proc of International Conference on Technologies and Applications of Artificial Intelligence (TAAI), 2011, pp. 331–336.
[107] F.M. Gonzalez-Longatt, Optimal offshore wind farms' collector design based on the multiple travelling salesman problem and genetic algorithm, in: IEEE Grenoble PowerTech (POWERTECH), 2013, pp. 1–6.
[108] J. Li, Q. Sun, M. Zhou, X. Dai, A new multiple traveling salesman problem and its genetic algorithm-based solution, in: IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 2013, pp. 627–632.
[109] R. Russell, An effective heuristic for the m-tour traveling salesman problem with some side condition, Oper. Res. 23 (1977) 517–524.
