
Applied Soft Computing 61 (2017) 714–725


A modified two-part wolf pack search algorithm for the multiple traveling salesmen problem
Yongbo Chen, Zhenyue Jia, Xiaolin Ai, Di Yang, Jianqiao Yu ∗
School of Aerospace Engineering, Beijing Institute of Technology, Beijing, 100081, China

Article history:
Received 23 June 2016
Received in revised form 22 February 2017
Accepted 18 August 2017
Available online 31 August 2017

Keywords:
Multiple travelling salesmen problem (MTSP)
Modified two-part wolf pack search (MTWPS) algorithm
Transposition and extension (TE) operation
Convergence rate
Reachability

Abstract

This paper proposes a modified two-part wolf pack search (MTWPS) algorithm, updated by the two-part individual encoding approach as well as the transposition and extension (TE) operation, for the multiple travelling salesmen problem (MTSP). Firstly, the two-part individual encoding approach is introduced into the original WPS algorithm for the MTSP, which is named the two-part wolf pack search (TWPS) algorithm, to minimize the size of the problem search space. Secondly, an analysis of the convergence rate performance is presented to illustrate the reasonability of the maximum terminal generation of the novel TWPS algorithm. Then, based on the definition of the global reachability, the TWPS algorithm is further modified by the TE operation, which can greatly enhance its search ability. Finally, focusing on the objectives of minimizing the total travel distance and the longest tour, comparisons of the robustness and the optimality between different algorithms are presented, and experimental results show that the MTWPS algorithm can obtain higher solution quality than the other two methods.

© 2017 Published by Elsevier B.V.

1. Introduction

The multiple traveling salesmen problem (MTSP), which is the extension of the traveling salesman problem (TSP), is a well-known and important problem in operational research. The characteristic of the MTSP can be summarized as: m (m ≥ 2) salesmen visit n cities (n ≥ m), and their paths between the cities and the fixed depots form a group of Hamilton circuits without sub-tours. The purpose of the MTSP is to seek an optimal path among these paths. So the main difference between the MTSP and the TSP is that the tasks of the MTSP demand more than one salesman, which leads to higher complexity. Similar to the TSP, in essence, the MTSP is a non-deterministic polynomial hard (NP-hard) problem, which means that this problem cannot be solved in polynomial time on a regular computer. So seeking out a high-efficiency algorithm to obtain a sub-optimal solution in an acceptable CPU time is the main challenge for the MTSP.

Compared with the TSP, even though the MTSP has a higher difficulty of solution-finding, it has a wider range of applications, especially in various routing and scheduling problems. According to the number of depots, the MTSP can be divided into four forms according to Ref. [1], which can be used in different scenes and different practical applications. For example, when the MTSP has only one depot, it is more suitable for express delivery services [2]. On the other hand, when the MTSP includes multiple depots, which is a more adequate simulation of real-life situations, it can be used in many problems, such as hot rolling scheduling [3], the vehicle scheduling problem (VSP) [4] and so on. Additionally, if the tasks of the MTSP consider time window constraints, the MTSP can be viewed as a multiple traveling salesmen problem with time windows (MTSPTW) [5,6]. One of the famous applications of the MTSPTW is the mission planning problem. The mission planning problem generally arises in the context of autonomous mobile robots and unmanned aerial vehicles (UAVs). The applications of the MTSPTW in the mission planning problem are reported by the following references: M. Alighanbari et al. research the task allocation problem for a fleet of UAVs with tightly coupled tasks and rigid relative timing constraints [7]. L. Evers et al. tackle the online stochastic UAV mission planning problem with time windows and time-sensitive targets using a re-planning approach under the frame of the MTSPTW [8]. Besides the mission planning problem, the MTSPTW is also applied in some situations concerning the transportation of goods. For instance, B. Skinner et al. solve the transportation scheduling problem of the containers in the Patrick AutoStrad container terminal [9]. Moreover, X. B. Wang and A. C. Regan solve the local truckload pickup and delivery problems in

∗ Corresponding author.
E-mail addresses: bit chenyongbo@163.com (Y. Chen), jiazhenyue@163.com (Z. Jia), bitaixiaolin@163.com (X. Ai), bluegirl 625@126.com (D. Yang), jianqiao@bit.edu.cn (J. Yu).

http://dx.doi.org/10.1016/j.asoc.2017.08.041
1568-4946/© 2017 Published by Elsevier B.V.

the view of the MTSPTW [5]. In short, it is extremely significant and valuable to research the MTSP and its variations.

The heuristic optimization methods, which are powerful tools for NP-hard problems, have captured much attention from researchers. In order to solve the MTSP, a novel heuristic optimization algorithm called two-part WPS (TWPS), which is inspired by Refs. [10] and [11], is presented in this paper. Then, its weakness is discussed around the global reachability of the initial population updated by the TWPS algorithm. Finally, aimed at overcoming this weakness, a transposition and extension (TE) operation, which dramatically improves the solution quality, is introduced to modify this algorithm. The main contributions of this paper include: the presentation of the TWPS algorithm based on the two-part individual representation technique for the MTSP, the discussions of the reachability problem and the convergence generation of the TWPS algorithm, and the modification of the TWPS algorithm (the MTWPS algorithm).

This paper is organized as follows: In Section 2, a literature overview of current works on solving the MTSP and a brief survey of related works on the WPS algorithm are introduced. Section 3 presents the original WPS algorithm and the TWPS algorithm updated by the two-part individual representation technique for the MTSP. Then, the discussion of the convergence generation of the TWPS algorithm and the definition of the global reachability of the initial population in the TWPS algorithm are finished in Section 4. Subsequently, the MTWPS algorithm based on the TE operation is presented to overcome the serious weaknesses of the algorithm in Section 5. Section 6 presents the experimental results. At last, the conclusion and summary of this study are presented in Section 7.

2. Literature review

As a widely applicable problem, the MTSP has been solved by many approaches. At the tactical and problem conversion level, these approaches can be divided into two kinds: the direct approach and the transformation approach. The direct approach means that the approach can solve the MTSP directly, without any transformation to the TSP. And if not, it is a transformation approach. For example, one of the first direct approaches to solve the MTSP is presented by Laporte and Nobert [12]. They propose an algorithm whose main characteristic is the relaxation of most of the constraints of the problem during its solution. Because the classical TSP has received a great deal of attention and it is also a test problem for most optimization methods, a lot of optimization methods can be used to solve a standard TSP. Therefore, the transformation approach, which means to transform the MTSP into the standard TSP, is a common idea to solve the MTSP. In this way, most optimization methods used in the TSP could be applied to the MTSP. For example, one of the pioneers using the transformation approach is Gorenstein S, whose main ideas are to add m − 1 additional home cities to an MTSP with m traveling salesmen and to set the home-to-home distances as infinite at the same time [13]. In the recent literature, P. Oberlin et al. present a transformation approach to a multiple depots, multiple traveling salesmen problem, and then use the Lin-Kernighan (LKH) heuristic algorithm, which is one of the best heuristics for solving the travelling salesman problem, to obtain the solution [14]. Even though some transformation approaches are simple and feasible, the obtained TSP may be seriously deteriorated, which leads to solving difficulty for the new problem and can be even harder than solving the original MTSP [15].

In the sense of the optimization methods, these approaches can also be classified into two kinds: exact algorithms and heuristic algorithms [15]. The exact algorithms are a kind of early algorithms, which are based on rigorous mathematical theory. The most famous and commonest exact algorithm is the branch-and-bound method, which was first proposed to solve the large-scale symmetric MTSP by Gavish and Srikanth [16]. Then, there are some other kinds of branch-and-bound methods for solving the MTSP. For example, S. Saad et al. combine the branch-and-bound algorithm with the Hungarian method to solve the MTSP [17]. Although the exact algorithms have rigorous mathematical foundations, their problem-solving ability is completely dependent on the size of the problem. When the size of the problem grows, the solving time will become unacceptable. As a result, there are a growing number of scholars turning to the research of heuristic algorithms, which can easily obtain an optimal, sub-optimal or feasible solution for the large-sized MTSP in an acceptable CPU time.

With the development of computer technology, heuristic algorithms have developed quickly and been applied broadly. There are many heuristic algorithms that are used to solve the MTSP, such as: the greedy algorithm [6], evolutionary algorithms [18,19], tabu search [20], the simulated annealing (SA) algorithm [21,22], market-based algorithms [23], artificial neural network (NN) approaches [24–26] and so on. Among these methods, there is one kind of method continuously concerned by researchers: genetic algorithms (GA).

The development process of GA in the MTSP centers on the chromosome coding representation. The first reference utilizing the GA for the solution of the MTSP seems to be due to C. Malmborg [4]. He develops a GA with a two-chromosome representation technique for the MTSP, which means that the first chromosome provides a permutation of the n cities and the second one assigns a salesman to each of the cities in the corresponding position of the first chromosome. Similarly, based on the same two-chromosome representation technique, Park Y.-B. proposes a hybrid genetic algorithm (HGAV) incorporating a greedy interchange local optimization algorithm for the vehicle scheduling problem with service due times and time deadlines [27]. Beyond that, Tang, L. et al. use a different GA with a one-chromosome representation technique, whose chromosome length is n + m − 1, to solve the MTSP model developed for hot rolling scheduling [28]. In a later reference, Arthur E. Carter and Cliff T. Ragsdale put forward a new chromosome encoding scheme with a two-part chromosome representation [4]. Based on these methods, Shuai Yuan et al. point out that the two-chromosome encoding schemes based on Refs. [27] and [28] have a larger number of redundant solutions in the search space compared with the latter two-part chromosome representation based on Ref. [4]. Additionally, in the recent literature, the authors of Ref. [28] present a multi-structure GA. Even though this representation doesn't have the redundancy problem, it may lead to overlarge storage space for the chromosome and an overly complex updating process for the crossover and mutation operations when the size of the MTSP is large. In a word, the two-part chromosome representation is the best coding scheme so far, so this representation is used to improve the original WPS algorithm in this paper.

In view of all the above heuristic algorithms, most of them can obtain the optimal/sub-optimal solution of middle and small-sized problems easily, but for the large-sized problem their solution effects are tremendously different. Therefore, it is imperative to find more efficient heuristic algorithms for the large-sized MTSP. In this paper, the WPS algorithm is introduced and modified to solve the MTSP. The idea of the WPS algorithm was first presented by Chenguang Yang et al., where it was used as the local search to replace the worker in the Marriage in Honey Bees Optimization (MBO) algorithm [29]. Then, it was presented again by Hu-Sheng Wu and Feng-Ming Zhang with a few modifications and renamed the wolf pack algorithm (WPA). At the same time, some test optimization functions were solved by the WPS algorithm in comparison with the GA, the particle swarm optimization (PSO) algorithm, the artificial fish swarm algorithm (AFSA), the artificial bee colony (ABC) algorithm, and the firefly algorithm (FA) [30]. The results show that

Fig. 1. The schematic diagram of the wolf pack search algorithm.

the original WPS algorithm has better convergence and robustness, especially for high-dimensional functions. The MTSP is a typical high-dimensional problem, whose dimension grows quickly with the number of cities and salesmen. Hence, also considering its novelty, we choose to modify the WPS algorithm for this problem. In fact, the original WPS algorithm and the TWPS algorithm are not good global optimization methods for the large-sized MTSP, which will be discussed and shown in Sections 4 and 6. But there is a lot of room to improve, so it is improved and applied to the large-sized MTSP in this paper.

3. The original WPS algorithm and the novel TWPS algorithm

3.1. The original WPS algorithm

The WPS algorithm is inspired by the uniform action of a social wolf pack. The wolves cooperate well with each other and attack their competitors and prey. The whole hunting activity of a wolf group can be summarized as the following process: Firstly, the safari wolves, which are more experienced and stronger than the others, need to be elected to walk around; Secondly, the safari wolves will search for the smell around them. One of the safari wolves may find the thickest odor, which means the quarries are around; Thirdly, it will inform all the other wolves by its howling. The whole wolf pack will move towards the direction of the howling. At the same time, the quarries can be constrained in a smaller area; Finally, some wolves may stop their hunting activity, because their physical conditions are bad and they will die. Instead, some new wolves may join the wolf pack. So the members of the wolf pack are updated constantly during the hunting process. The original WPS algorithm imitates the above process. The schematic diagram of the original WPS algorithm is shown in Fig. 1.

Based on the above description of a wolf pack, we abstract the process of the WPS algorithm. The frame of the original WPS algorithm consists of the following steps:

1) Initialization – set the initial parameters, such as: population quantity N, approach rate Step, elitism quantity N', the number of weak wolves N*, local search scale R and so on, then randomly generate an initial wolf pack {wolf_i, i = 1, 2, ..., N};
2) Fitness – evaluate the cost functions of all wolves f(wolf_i), i = 1, 2, ..., N;
3) Elitism – select a more appropriate small group {wolf_i, i = elitism_1, elitism_2, ..., elitism_N'} in the pack;
4) Safari – optimize in a small scale OR(wolf_i), i = elitism_1, elitism_2, ..., elitism_N', and obtain the best wolf GBest;
5) Approach – get close to the best wolf GBest:

wolfnew_i = wolf_i + Step · (GBest − wolf_i) / |GBest − wolf_i|,  i = 1, 2, ..., N    (1)

6) Replacement 1 – replace the original best wolf GBest if a new wolf wolfnew appears whose cost function f(wolfnew) is better than f(GBest);
7) Replacement 2 – sort the whole wolf pack, replace the last N* old weak wolves by N* new random ones;
8) Loop – go back to step 2.

Of course, the whole circulation will stop if an end condition is satisfied; otherwise, it will continue.

3.2. The TWPS algorithm for MTSP

Based on the recent references [29] and [30], we can see that the original WPS algorithm can be directly used in real-valued unconstrained global optimization problems. All operations of the WPS algorithm are suitable for continuous functions. However, the MTSP is a 0–1 integer programming problem with complex constraints. So it cannot be applied to the MTSP directly. In order to solve this problem, in this section, we propose a novel WPS

algorithm with a two-part individual representation, named the TWPS algorithm, for solving the MTSP. The two-part individual representation technique is inspired by the two-part chromosome technique in Refs. [11] and [4]. In this technique, the wolf is divided into two parts, of which the first part is a permutation of the n cities and the second part is the number of cities assigned to the corresponding salesman [11]. In this way, this representation, which is exclusive for each valid solution, reduces the redundant solutions in the solution space to the minimum. The size of the solution space for the two-part chromosome is n!·C(n−1, m−1) [4]. An example of the two-part individual representation is shown in Fig. 2.

Fig. 2. Example of the two-part individual representation technique for 8 cities with 3 salesmen.

With the help of the two-part individual representation technique, the original WPS algorithm can be redefined as the TWPS algorithm. In the following part, 6 basic steps will be shown by an example which has 8 cities and 3 salesmen.

Firstly, after the initialization step, there are N wolves which are one-dimensional arrays satisfying the structure of the two-part individual representation technique.

Step 1: The initialization of the wolf pack.

Secondly, the cost function of each wolf is computed and sorted. We can obtain a small elite wolf group with N' wolves.

Step 2: The evaluation of the cost functions of all wolves.

Thirdly, we will define the safari process in the TWPS algorithm. The elites are chosen based on the obtained cost functions. These elites make a small local optimization around themselves. The local search scale R is defined as 1, which means that only 1R = 1 element in the first part and 2R = 2 elements in the second part need to be operated. When R is bigger than 1, this step is repeated R times. Then, the safari process of the elites is defined as the following two steps: a. As shown in the figure, we randomly select an element {4} from the first part of the i-th wolf. Two new wolves will be generated by changing the order between {4} and its two adjacent elements {6} and {1}; then the cost functions of these three wolves will be computed and sorted. The best one will replace the original one. b. Then, two elements in the second part are chosen randomly to add <1, −1> and <−1, 1> to them. If the original element is 1 or n − m + 1, they only need to add <1, −1> or <−1, 1> instead of both of them. Suppose the number of cities visited by each of the m salesmen is represented by (x1, x2, ..., xm). The second parts of the two newly obtained wolves must meet: x1 + x2 + ... + xm = n, where xi > 0, i = 1, 2, ..., m. Similarly, the cost functions of these three wolves will be computed to obtain the optimal one to replace the best wolf GBest in the new wolf pack. The best wolf GBest is assumed to be <1, 2, 3, 4, 5, 6, 7, 8, 2, 2, 4>, which means the first salesman visits cities in the order of 1 and 2, the second salesman visits cities in the order of 3 and 4, and the third salesman travels to cities in the sequence of 5, 6, 7 and 8.

Step 3.a: The safari of the first part of the elite wolves.

Step 3.b: The safari of the second part of the elite wolves.

Fourthly, the approach process of the new TWPS algorithm is defined as the following steps: Every wolf gets close to the best wolf GBest by a rate Step. The approach rate Step means repeating the following operation Step times in a generation. Let Step be 1. This specific approach operation is divided into two parts: the approach of the first part and of the second part. In the first part of the i-th wolf, an element {3} is chosen randomly to seek its position in the best wolf GBest. Its order in the i-th wolf is 6, which is bigger than its order in the best wolf GBest, so the approach of the first part means changing the order <1, 3, 8> into <3, 1, 8>. Subsequently, in order to finish the approach of the second part, the numbers of cities assigned to the corresponding salesmen are classified into 3 groups (bigger, smaller and equal) compared with the corresponding elements in the best wolf GBest. The approach way is to add 1 to a random element in the smaller group and to subtract 1 from a random element in the bigger group. Just like the example, the equal group, which is without operation, is {2} for the first salesman. The smaller group is {2} for the third salesman and the bigger group is {4} for the second salesman. So the original sequence changes from <2, 4, 2> to <2, 3, 3> by adding <0, −1, 1>.

Step 4.a: The approach of the first part of each wolf.

Step 4.b: The approach of the second part of each wolf.
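As a concrete, runnable illustration of the two-part representation, the short Python sketch below decodes the GBest individual from the text into one tour per salesman and evaluates the size of the two-part search space, n!·C(n−1, m−1). The helper name `decode` is ours, not part of the original algorithm.

```python
from math import comb, factorial

def decode(wolf, n, m):
    """Split a two-part individual into one city tour per salesman.

    The first n entries are a permutation of the cities 1..n; the last
    m entries say how many cities each salesman takes and must sum to n.
    """
    cities, counts = wolf[:n], wolf[n:]
    assert sorted(cities) == list(range(1, n + 1)) and sum(counts) == n
    tours, pos = [], 0
    for c in counts:
        tours.append(cities[pos:pos + c])
        pos += c
    return tours

# The GBest example from the text: 8 cities, 3 salesmen.
gbest = [1, 2, 3, 4, 5, 6, 7, 8, 2, 2, 4]
print(decode(gbest, n=8, m=3))    # [[1, 2], [3, 4], [5, 6, 7, 8]]

# Size of the two-part search space for n = 8, m = 3: n! * C(n-1, m-1).
print(factorial(8) * comb(7, 2))  # 846720
```

Even with only 8 cities and 3 salesmen the space already holds 846,720 distinct individuals, which is why the size of the search space and the diversity of the pack matter in what follows.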

Fifthly, the cost functions of all new wolves are computed and
compared with the original ones. Next, the original best wolf GBest
and the last N* old weak wolves are replaced respectively by a new
wolf wolfnew whose cost function f(wolfnew ) is better than f(GBest)
and N* new random wolves.
Finally, repeat step 2 to step 5 until the ending conditions are
met.
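The approach move of steps 4.a and 4.b can be sketched as follows, under our reading of the text: one randomly chosen city is moved one position toward its place in GBest, and one unit of load is shifted from a salesman with too many cities to one with too few. The function name `approach` and the example wolf are our own assumptions; GBest and the second-part example <2, 4, 2> → <2, 3, 3> follow the text.

```python
import random

def approach(wolf, gbest, n, m, rng=random):
    """One approach move of a two-part wolf toward GBest (steps 4.a/4.b)."""
    cities, counts = wolf[:n], wolf[n:]
    # Step 4.a: move one random city one position toward its place in GBest.
    c = rng.choice(cities)
    i, j = cities.index(c), gbest[:n].index(c)
    if i > j:
        cities[i - 1], cities[i] = cities[i], cities[i - 1]
    elif i < j:
        cities[i + 1], cities[i] = cities[i], cities[i + 1]
    # Step 4.b: shift one city from a "bigger" to a "smaller" salesman.
    bigger = [k for k in range(m) if counts[k] > gbest[n + k]]
    smaller = [k for k in range(m) if counts[k] < gbest[n + k]]
    if bigger and smaller:
        counts[rng.choice(bigger)] -= 1
        counts[rng.choice(smaller)] += 1
    return cities + counts

gbest = [1, 2, 3, 4, 5, 6, 7, 8, 2, 2, 4]
wolf = [2, 4, 6, 7, 5, 3, 1, 8, 2, 4, 2]     # hypothetical wolf
new = approach(wolf, gbest, n=8, m=3, rng=random.Random(1))
print(new[8:])  # [2, 3, 3] -- the <2, 4, 2> -> <2, 3, 3> move from the text
```

Note that the second-part move here is deterministic because only one salesman is "bigger" and one is "smaller"; with several of each, the text prescribes a random choice within each group.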

4. The discussion for the TWPS algorithm

The optimal capability of the normal WPS algorithm has been discussed in some references [29] and [30] by some test functions. These test functions are the classical benchmark functions which include different characteristics: unimodal, multimodal, regular, irregular, separable, nonseparable and multidimensional. The results show that the WPS algorithm has good convergence and robustness for these functions. It is well known that some integer programming problems, such as the TSP and the MTSP, are much more difficult than the interval optimization of continuous functions, because of more stringent conditions, much higher dimensions and discreteness. Therefore, the MTSP is a bigger challenge for the original WPS algorithm and the TWPS algorithm. The numerical experiment results of the TWPS algorithm will be completed in Section 6. We will find that the TWPS algorithm is not a good approach to solve the MTSP. The lack of diversity of the wolf pack in the updated process is the main reason leading to the bad results of the WPS algorithm. There are two main points that greatly affect the diversity of the wolf pack. We will discuss them in this part.

4.1. The discussion of the convergence generation of the TWPS algorithm

The first one is the convergence generation of the optimization algorithm. If the convergence generation of a discrete optimization algorithm is too small (named the premature problem, which means converging too fast, and is also a common problem of other algorithms), the diversity of the wolf pack will be decided by the convergence generation rather than the maximum terminal generation. On the contrary, the diversity of the wolf pack will be decided by the maximum terminal generation. The maximum terminal generation is set artificially, so we need to find a way to obtain the convergence generation of the TWPS algorithm.

In the TWPS algorithm, step 3 and step 4 are the only two operations which can change the order of the elements to search forward. What is more, as step 3 depends on the cost functions, it cannot be discussed apart from the problems. At the same time, step 3 is the local searcher, which means that it is not the key search operation determining the search direction in this method. Hence, we only discuss the effect of step 4. Further, although GBest may be changed many times in the optimization process, the convergence generation is mainly decided by the approach rate between two wolves based on step 4. Because the whole updated process is full of uncertainty, we can only estimate the maximum terminal generation of convergence from an initial wolf to an optimal wolf by step 4 alone and make some numerical simulations in some typical situations.

Proposition 1. Suppose that the optimal wolf is permanent and the initial wolf is chosen randomly; then the convergence generation of the first part of the wolf will be smaller than kn(n−1)/(2Step) with a probability over (1−(1/2)^k)^(n(n−1)/2), where k is the accommodation coefficient, which can improve the accuracy of the estimation method and meets k ∈ N+.

Proof of Proposition 1: Firstly, we need to discuss a special circumstance in the updated process. Based on Step 4.a, when the target elements of two transposed elements both locate on the same side of their original positions, these two elements may go into an infinite loop. Of course, the probability of keeping the infinite loop, (1/2)^g, tends to 0 when the generation g increases. If the infinite loop never stops, the maximum convergence generation of the first part of the wolf will be infinite.

For example, the first part of an initial wolf is <4, 1, 2, 5, 3> and the first part of the optimal wolf is <1, 2, 3, 4, 5>. The elements <1, 2> of the initial wolf may go into the infinite loop, because their corresponding elements in the best wolf GBest are both smaller than their initial orders 2 and 3. When they alternately approach their own corresponding elements, it is an obvious infinite loop. This example is shown in Fig. 3.

Fig. 3. An example of the infinite loop in the MTWPS algorithm.

Then, we can assume that, in the whole updated process, the above special circumstance doesn't exist, which means that every two transposed elements only exchange their positions once; then the maximum number of approach operations will be n(n−1)/2. It can be obtained by a recurrence sequence. Let a_n be the maximum convergence generation of the first part of the wolf with n cities. The recurrence relation between a_n and a_{n−1} is a_n = a_{n−1} + n − 1, so, by the superposition method, a_n is n(n−1)/2. Considering that there are Step operations in a generation, the maximum convergence generation will be n(n−1)/(2Step).

Finally, the appearances of the special circumstance are everpresent and uncertain. In the worst situation, it can be assumed that the special circumstance continuously happens k times for every two transposed elements in the updated process. Then, the maximum convergence generation of the first part of the wolf will be kn(n−1)/(2Step). At the same time, the probability of breaking out of the loop within k times will be 1−(1/2)^k. For the whole updated process, the probability of this worst situation will be (1−(1/2)^k)^(n(n−1)/2). So, in conclusion, Proposition 1 is proved.

In order to further verify its correctness, a numerical simulation about this remark is shown as follows: Two n-dimension random row vectors meeting all conditions of the first part of the wolf are updated by step 4. One represents the first part of the best wolf and the other one is the first part of the initial wolf. Because the updated process is stochastic, step 4 is repeated 100,000 times. Then, the convergence generations of the first part of the initial wolf are recorded and shown in Fig. 4, where the graticule lines stand for the numerical simulation results and the round lines represent the results obtained by Proposition 1. In the results, we can see that the proportions whose convergence generations are smaller than kn(n−1)/(2Step) become bigger and verge to 1 with the growing of k. At the same time, each numerical simulation result is bigger than its corresponding theoretical value (1−(1/2)^k)^(n(n−1)/2), which verifies the validity of Proposition 1.

Proposition 2. Suppose that the optimal wolf is permanent and the initial wolf is chosen randomly; then the convergence generation of the second part of the wolf will be the maximal difference value of the corresponding elements.
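The bound of Proposition 1 can be probed numerically in the spirit of the Fig. 4 experiment. The sketch below is ours and makes one simplifying assumption: each generation applies a single step-4.a-style move (Step = 1) to a uniformly chosen city, and a generation is counted even when the chosen city is already in place.

```python
import random

def bound(n, k=1, step=1):
    """Proposition 1 estimate: k * n * (n - 1) / (2 * Step)."""
    return k * n * (n - 1) // (2 * step)

def generations_to_converge(n, rng):
    """Apply step-4.a moves until a random permutation matches the target."""
    target = list(range(1, n + 1))
    perm = target[:]
    rng.shuffle(perm)
    g = 0
    while perm != target:
        c = rng.choice(perm)                    # pick a random city
        i, j = perm.index(c), target.index(c)
        if i > j:                               # move it one position toward
            perm[i - 1], perm[i] = perm[i], perm[i - 1]   # its target place
        elif i < j:
            perm[i + 1], perm[i] = perm[i], perm[i + 1]
        g += 1
    return g

rng = random.Random(0)
n, trials = 8, 2000
gens = [generations_to_converge(n, rng) for _ in range(trials)]
for k in (1, 2, 3, 4):
    frac = sum(g <= bound(n, k) for g in gens) / trials
    print(f"k={k}: share of runs within bound {bound(n, k)}: {frac:.3f}")
```

By construction the share of runs finishing within the bound is nondecreasing in k, mirroring the trend described for Fig. 4.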

Fig. 4. The numerical simulation for Proposition 1.

Based on step 4.b, this proposition is obviously true. For example, the second part of an initial wolf is <1, 2, 3> and that of the optimal wolf is <2, 2, 2>. Then, the maximal difference of the corresponding elements is |1−2| = |3−2| = 1. Apparently, based on step 4.b, the convergence generation of the second part of this initial wolf is 1 too.

Generally, because the convergence generation of the second part is smaller than n, it is much smaller than that of the first part (n ≤ kn(n−1)/(2Step), if n > 2). The premature problem is mainly decided by the first part. In other words, the convergence generation of the whole wolf pack using the WPS is largely decided by kn(n−1)/(2Step). Meanwhile, it also means that there are only kNn(n−1)/(2Step) different wolves in the whole process at most. It is very easy to see that kNn(n−1)/(2Step) is not a very big number compared with the solution space for the two-part wolf, n!·C(n−1, m−1). Of course, in most situations, the number of different wolves is even much smaller than the theoretical value. So, in order to improve the diversity of the wolf pack, the appropriate relationship between the approach rate Step and the maximum terminal generation is very significant. We will discuss this problem further by the simulations in Section 6.1.1.

4.2. The weakness analysis of the WPS algorithm

The second factor is the reachability problem. First of all, let us define the reachability of an intelligent optimization algorithm.

Definition 1. For an intelligent optimization algorithm, if any solution in the solution space can be obtained by the finite operations of any group of initial populations, without considering the different optimization problem and the new random complementary populations, the intelligent optimization algorithm is defined to have the global reachability.

A good intelligent optimization algorithm should not lose any feasible solutions in its solution space, because these missed feasible solutions may include the optimal/suboptimal solution. If an algorithm does not have the global reachability, in some situations, it will never obtain the optimal solution. Even though the decrease

Fig. 5. The typical case of the limitation of the WPS algorithm.

Proof of Proposition 3: We can put forward some counterexamples to prove this proposition. Because step 3 is a local searcher which greatly depends on the problem and the characteristics of populations, we only consider step 4, which mainly decides the updated process of the WPS algorithm. For example, there are only two wolves in the initial populations: GBest is <1, 2, 3, 4, 5, 6, 7, 3, 4> and the other one is <3, 2, 1, 7, 6, 5, 4, 4, 3>. The first parts of them respectively are <1, 2, 3, 4, 5, 6, 7> and <3, 2, 1, 7, 6, 5, 4>, and the remaining two elements are their second parts. This example is a typical case. It is shown in Fig. 5.

As shown in the figure, it is easy to see that the search space of wolf_i is limited by GBest. The elements cannot break through the dotted portion. For the first part, wolves such as <4, 5, 6, 7, 1, 2, 3, 4> and <1, 2, 3, 4, 5, 6, 7, 6, 1> are impossible to visit based on step 4. For the second part, there will be even more wolves missing in the updated process. The search space of the elements of the second part of the wolf wolf_i is respectively limited to (3, 6) and (1, 4). Using the existing method, wolves such as <1, 2, 3, 4, 5, 6, 7, 1, 6> and <3, 1, 2, 7, 6, 5, 4, 2, 5> are lost in the search space. The diversity of the solution space is severely limited by the TWPS algorithm. So, in short, the TWPS algorithm does not own the global reachability. In conclusion, Proposition 3 is proved. As a result, we need to modify the TWPS algorithm to improve its diversity in Section 5.

5. The modified wolf pack search algorithm with the transposition and extension operation

5.1. A transposition and extension operation

Simply put, the main bad effect of the above two points is the lack of diversity in the wolf pack. There are some common measures that are used to curb this difficulty, such as increasing the population quantity N and introducing new random wolves in step 5. Of course, increasing the population quantity N is not a good measure, because it will greatly slow down the computation speed, which, facing so huge a solution space, would be like trying to put out a burning cartload of faggots with a cup of water. In the early iterations, the new random wolves can, with a considerable probability, help to find some potentially good wolves compared with the early population. But with the improvement of the quality of the whole wolf pack, a randomly generated wolf without the updated process is generally much inferior to the whole population, which will lead to its discard. So these measures cannot overcome these problems fundamentally.

In this section, we propose a new transposition and extension (TE) operation, which is a preferable way to modify the TWPS algorithm, in step 5 of the TWPS algorithm to solve the MTSP. For each wolf, the transposition operation is used in the first part of the wolf, which is shown in Fig. 6. It is to obtain a new wolf wolf'_i
of the search space may accelerate the search, it could greatly affect by changing the order of elements between two random elements
the optimization solution and make the results greatly depend on <2, 6, 4, 1, 3> into <3, 1, 4, 6, 2>. Then the cost functions of the old
the initial populations and the different optimization problems. So wolf wolfi and the new wolf wolf’I are computed and compared to
the global reachability is a significant point to judge the quality of choose the better one. Finally, the better one will replace the old
the intelligent optimization algorithm. wolf wolfi . This simple operation can easily break through the lim-
itation of Fig. 5 to enhance the diversity of the first part of the wolf.
Proposition 3. The TWPS algorithm does not have the global At the same time, this operation can easily untie the intersectant
reachability. route and maintain the tour structure in the symmetrical MTSP.
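The transposition just described amounts to reversing the segment of the first part between two randomly chosen positions. A minimal sketch (the function name and the optional i, j arguments are ours, not the paper's):

```python
import random

def transposition(first_part, i=None, j=None):
    """Reverse the elements of the first part between positions i and j
    (inclusive); i and j are drawn at random when not supplied."""
    n = len(first_part)
    if i is None or j is None:
        i, j = sorted(random.sample(range(n), 2))
    new_part = first_part[:]
    new_part[i:j + 1] = reversed(new_part[i:j + 1])
    return new_part

# the example of Fig. 6: reversing the whole permutation
print(transposition([2, 6, 4, 1, 3], 0, 4))  # -> [3, 1, 4, 6, 2]
```

As in the text, the wolf then keeps whichever of wolfi and wolf'i has the better cost.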
Fig. 6. The transposition operation of the MTWPS algorithm.

For the second part of each wolf, an extension operation is introduced to greatly enlarge the search space; Fig. 7 shows an example. Firstly, a set A = {2, 3, 4}, which represents the limitation of the second part, is generated from a random element {4} of a wolf wolfi and its corresponding element {2} of the best wolf GBest. In this example, the universal set U of the second part, based on the constraint of the MTSP, is {1, 2, 3, 4, 5}. Subsequently, the complementary set CUA = {1, 5}, which represents the lost solution space, is obtained from A and U. Then, an element {5} of CUA is selected at random to create a new second part for the new wolf wolf'i; the other elements of the second part are generated randomly, subject to the constraint x1 + x2 + ... + xm = n. Finally, the new wolf wolf'i is compared with the original wolf wolfi, and the better of the two replaces it.

Fig. 7. The extension operation of the MTWPS algorithm.

5.2. The modified two-part WPS algorithm

With the help of the TE operation, the novel TWPS algorithm is further modified to solve the MTSP; the resulting method is named the modified two-part WPS (MTWPS) algorithm. The whole MTWPS algorithm is presented as pseudo code in Fig. 8.

6. Computational results

In this section, some computational results of the MTWPS algorithm for the MTSP are presented to verify its effectiveness. Apart from the parameters of the MTSP itself, the algorithm parameters are: the population quantity N = 100, the elitism quantity N' = 20, the number of weak wolves N* = 20, the approach rate Step = 1, and the local search scale R = 1.

6.1. Comparison

The first group of test problems, named MTSPn, are Euclidean, two-dimensional symmetric problems with n = 51, 100, 150 cities and m = 3, 5, 10 salesmen, respectively. Similar to Ref. [11], the coordinates of the cities are selected and transformed from a standard collection of TSPs from the Library of Travelling Salesman Problems <http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/> [11]. For each test problem, the home city is the first city in the list. In these simulations, all the salesmen must start at the same home city and return to it. Because of the stochastic nature of the algorithms, each problem is run 30 times, and the statistical data are presented [31]. In order to show the performance of the method correctly, the experiments are conducted with respect to two different cost functions. The stopping criterion of the TWPS algorithm and the MTWPS algorithm is the maximum terminal generation Gmax = 20000; the stopping criteria of the contrastive GA with the TCX method presented in Ref. [11] are maximum terminal generations of 50000 (MTSP51), 100000 (MTSP100) and 200000 (MTSP150). The reason why the stopping criteria of these three algorithms differ is given in Section 6.3.

6.1.1. Simulation for minimizing total travel distance

The first cost function is the total travel distance of all salesmen. This objective reflects the goal of minimizing the distance required to visit all n cities. The only constraint used with this objective is that each salesman must visit at least one city (other than the home city); without this constraint, the GA could reduce the number of salesmen in the problem, possibly reducing the MTSP to a TSP. Because each salesman must start from and return to the home city, the total travel distance of the combined trips also tends to increase with the number of salesmen.

In addition, for every method, each of the starting populations and each of the new random wolves in step 5 is seeded with a solution produced by a simple greedy heuristic in order to obtain a good starting point. The greedy seeding for the "minimize the total distance" problems is similar to Carter and Ragsdale [10]. The first non-home cities of the salesmen in this seeding are stochastic; the greedy solutions are then generated from the present locations of all the salesmen by finding the unassigned city that is closest to one of the salesmen. The closest unassigned city is assigned to the closest salesman, and this process continues until all cities are assigned.

Based on these parameters and this environment, the computational results, including the average cost function (Avg), the standard deviation (SD) and the best cost function (Best), for the three algorithms (GA with TCX [11], the TWPS algorithm and the MTWPS algorithm) are shown in Table 1.

In Table 1, we can see that the TWPS algorithm is not a good algorithm for the MTSP: for the large-sized MTSP, large parts of its solution space are lost during the update process. With the positive improvements of the TE operation, the MTWPS algorithm shows obvious advantages over the TWPS algorithm and the GA with TCX overall, even though the maximum terminal generations of the MTWPS algorithm are much smaller than those of the GA with TCX.
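The extension operation on the second part (Section 5.1) can be sketched as follows. This is our reading of the Fig. 7 example: the interval spanned by the wolf's element and GBest's element forms the set A, the universal set U is taken to be the feasible counts 1, ..., n−m+1 (an assumption on our part), one value is drawn from the complement of A, and the remaining elements are redrawn so that x1 + ... + xm = n with every xi ≥ 1.

```python
import random

def extension(x, gbest, idx, n):
    """Regenerate the second part x = (x1, ..., xm) of a wolf (a sketch).

    The element at position idx is redrawn from outside the interval
    spanned by x[idx] and gbest[idx] (the 'lost' search space); the
    other elements are redrawn at random so that every xi >= 1 and
    x1 + ... + xm = n.  Assumes the complement is non-empty."""
    m = len(x)
    lo, hi = sorted((x[idx], gbest[idx]))
    universal = set(range(1, n - m + 2))      # each salesman can take 1 .. n-m+1 cities
    complement = sorted(universal - set(range(lo, hi + 1)))
    new_x = [0] * m
    new_x[idx] = random.choice(complement)    # value taken from the lost search space
    rest = n - new_x[idx]                     # cities left for the other m-1 salesmen
    # split 'rest' into m-1 random positive integers via random cut points
    cuts = sorted(random.sample(range(1, rest), m - 2)) if m > 2 else []
    parts = [b - a for a, b in zip([0] + cuts, cuts + [rest])]
    for k, pos in enumerate(p for p in range(m) if p != idx):
        new_x[pos] = parts[k]
    return new_x
```

As with the transposition, the regenerated wolf wolf'i replaces wolfi only if its cost is better.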

Fig. 8. The pseudo code of the MTWPS algorithm.
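Both objectives reported in the tables below (total travel distance, Section 6.1.1, and longest individual tour, Section 6.1.2) can be computed directly from a two-part wolf. A sketch with a hypothetical four-city instance; the decoding convention (home city 0, consecutive slices of the first part) follows the two-part encoding, and all names and coordinates here are ours:

```python
import math

def tour_costs(cities, first_part, second_part, home=0):
    """cities: dict city -> (x, y); first_part: visiting order of the
    non-home cities; second_part: number of cities per salesman.
    Returns (total_distance, longest_tour); every salesman starts and
    ends at the home city."""
    dist = lambda a, b: math.dist(cities[a], cities[b])
    tours, start = [], 0
    for count in second_part:
        route = [home] + first_part[start:start + count] + [home]
        tours.append(sum(dist(a, b) for a, b in zip(route, route[1:])))
        start += count
    return sum(tours), max(tours)

# hypothetical 4-city instance, 2 salesmen: tours 0-1-2-0 and 0-3-0
cities = {0: (0, 0), 1: (0, 3), 2: (4, 3), 3: (4, 0)}
total, longest = tour_costs(cities, [1, 2, 3], [2, 1])  # -> 20.0, 12.0
```

The minsum objective of Section 6.1.1 takes the first return value; the minmax objective of Section 6.1.2 takes the second.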

Table 1
The computational results of the three algorithms (objective: minimizing the total travel distance).

Data set   m    TCX Avg   TWPS Avg   TWPS Best   TWPS SD   MTWPS Avg   MTWPS Best   MTWPS SD
MTSP51     3    492       548        547         2         479         465          10
MTSP51     5    519       631        608         33        503         499          4
MTSP51     10   670       828        810         19        627         602          21
MTSP100    3    26130     27460      26135       1013      22878       21879        542
MTSP100    5    28912     33182      30995       1319      25526       24169        914
MTSP100    10   30988     42414      40893       1094      30751       29897        757
MTSP150    3    44686     51290      50102       1104      43658       42104        1353
MTSP150    5    47811     55411      52928       2112      46122       45311        575
MTSP150    10   51326     69097      67297       3066      49207       47709        797

In fact, in Table 1, we can find that the advantages of the MTWPS are not very considerable for the large-sized MTSP when the maximum terminal generation Gmax is set as 20000. Let us make a small discussion of this phenomenon. The theoretical lower bounds of the convergence probability of the wolf in the above 3 MTSPs, growing with the accommodation coefficient k, are shown in Fig. 9. According to Proposition 1, for the MTSP150, if we demand that the theoretical lower bound of the convergence probability of the wolf satisfy (1 − (1/2)^k)^(n(n−1)/2) > 90%, the required number of convergence generations is kn(n−1)/(2Step) = 189975, where Step = 1, in the worst situation. Of course, even though in common situations the convergence generations are much smaller than 189975, the maximum terminal generations in our simulations, which are set as 20000, are seriously insufficient. What is more, the above result is obtained under the assumption that the optimal wolf is permanent; in the real update process, the optimal wolf is also constantly updated. So, in short, in order to guarantee the convergence of the TWPS and MTWPS algorithms, the maximum terminal generations of the simulations and the approach rate Step both need to be increased for the large-sized MTSP. Because large numbers of simulation generations are time-consuming, we run only 5 trials for the MTSP150 with m = 3, with the maximum terminal generation and the approach rate Step set as 30000 and 3, respectively. The average cost function, the standard deviation and the best cost function then decrease to 41106, 420 and 40773, respectively.

Fig. 9. The convergence probability growing with the accommodation coefficient k.

6.1.2. Simulation for minimizing longest tour

In the real world, most MTSP applications are more interested in minimizing the longest tour of any individual salesman, which helps to balance the workloads of the different salesmen and to minimize the makespan. The objective of minimizing the total travel distance often produces a result in which one salesman spends a long time visiting most of the cities while the others visit only one or a few cities. Apparently, such a result may greatly reduce the work efficiency and the longevity of the whole system. So, in this part, the cost function is changed to the longest tour of the individual salesmen. With this objective, the cost function values decrease as the number of salesmen increases.

Similar to the previous simulation, for every method, each of the starting populations and each of the new random wolves is seeded by a simple greedy heuristic. Again, the first non-home cities of the salesmen in this seeding are stochastic; the greedy solutions for the "minimize the longest tour" problems are then determined by rotating through all of the salesmen in a round-robin fashion, assigning the closest unassigned city to each salesman in turn, and continuing until all the cities are assigned [10].

Then, the computational results, including the average cost function (Avg), the standard deviation (SD) and the best cost function (Best), for the three algorithms, GA with TCX [11], the TWPS algorithm and the MTWPS algorithm, are presented in Table 2.

In Table 2, it is easy to find that, with the positive help of the TE operation, the MTWPS algorithm obtains a better overall result than the other two algorithms under the unfavorable stopping criterion, although the average result for the problem MTSP100 (m = 10) is worse than that of the GA with the TCX operation. The greatest improvement over the TWPS algorithm, 41.18%, is achieved on the MTSP150 (m = 10). Obviously, the simulation results for minimizing the longest tour exhibit the same maximum-terminal-generation problem discussed in the previous section; if the maximum terminal generations of the simulations and the approach rate Step are increased, the results will be better for the large-sized MTSP.

6.2. Statistical hypothesis testing

Since the GA with TCX, the TWPS algorithm and the MTWPS algorithm all belong to the class of stochastic search algorithms, statistical hypothesis tests are very important to show the different performance of the three algorithms. Our experiments for the three approaches are conducted using 30 independent trials. There are two common statistical tests for two independent samples: the t-test and the Mann-Whitney U rank sum test. The t-test is a statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis, so we first need to confirm whether the results satisfy the normal distribution assumption. The results of the GA with TCX and the MTWPS algorithm for problem MTSP51 (m = 3) are tested using the one-sample Kolmogorov-Smirnov test function of SPSS 20.0. Even though we find that the significance is bigger than 0.05, the values are not big enough to convince us that the results follow the normal distribution. Hence, we employ statistical hypothesis testing using the Mann-Whitney U test. In this paper, the null hypothesis can be stated as: "the MTWPS provides the same solution quality when applied to 30 trials". Statistical differences are indicated according to the Mann-Whitney U test with a level of significance of sig = 0.05 (95% confidence). If the statistical tests result in a significance level of sig > 0.05, they do not provide enough statistical evidence to refute or confirm the null hypothesis, which indicates that the performance of the algorithms is similar. The results of the Mann-Whitney U values and the significance levels obtained by SPSS 20.0 are shown in Table 3.

In Table 3, we can see that most significance level values are less than 0.05, which refutes the null hypothesis. When the maximum terminal generation Gmax increases to 30000, all significance level values become less than 0.05. Hence, we have the following result: "the MTWPS provides significantly different solution quality when applied to 30 trials". In other words, the solutions found by the MTWPS algorithm are statistically significantly better than the solutions found by the other two methods (TCX and TWPS) for all cases (MTSP51, MTSP100 and MTSP150).
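The U values of the kind reported in Table 3 are easy to reproduce with a direct pairwise count (the paper used SPSS; this pure-Python version counts, for every pair of observations, how often one sample's result lies below the other's). With 30 trials per sample, U ranges from 0 to 900, so a 900 (0.000) cell corresponds to complete separation of the two samples; the data below are hypothetical:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b
    (each tie contributes 1/2)."""
    u = 0.0
    for x in a:
        for y in b:
            u += 1.0 if x < y else (0.5 if x == y else 0.0)
    return u

# two fully separated samples of 30, as in the 900 (0.000) cells
better = [470 + i for i in range(30)]   # hypothetical MTWPS costs (lower)
worse = [540 + i for i in range(30)]    # hypothetical competitor costs
print(mann_whitney_u(better, worse))    # -> 900.0
```

The significance level then follows from the exact or normal-approximated distribution of U under the null hypothesis, which SPSS computes.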

Table 2
The computational results of the three algorithms (objective: minimizing the longest tour).

Data set   m    TCX Avg   TWPS Avg   TWPS Best   TWPS SD   MTWPS Avg   MTWPS Best   MTWPS SD
MTSP51     3    203       213        205         15        181         178          6
MTSP51     5    154       167        151         11        142         131          8
MTSP51     10   113       132        131         1         112         112          0
MTSP100    3    12726     12101      11926       321       10311       10122        193
MTSP100    5    10086     9290       9190        444       8858        8434         318
MTSP100    10   7064      8016       7814        350       7126        6795         347
MTSP150    3    18019     19420      18236       1367      17535       16755        551
MTSP150    5    12619     15712      14726       842       11354       10317        851
MTSP150    10   8054      13641      12417       774       8024        7823         345
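The round-robin greedy seeding described in Section 6.1.2 can be sketched as follows (function and variable names are ours; ties in distance are broken arbitrarily):

```python
import math

def round_robin_seed(cities, home, starts):
    """cities: dict name -> (x, y); starts: the (stochastic) first
    non-home city of each salesman.  Cycles through the salesmen,
    giving each in turn the closest still-unassigned city."""
    tours = [[home, s] for s in starts]
    unassigned = set(cities) - {home} - set(starts)
    while unassigned:
        for tour in tours:
            if not unassigned:
                break
            here = cities[tour[-1]]
            nxt = min(unassigned, key=lambda c: math.dist(here, cities[c]))
            tour.append(nxt)
            unassigned.remove(nxt)
    return tours
```

The seeding for the "minimize the total distance" objective (Section 6.1.1) differs only in that the next city always goes to the salesman who is currently closest, rather than rotating through the salesmen in turn.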

Table 3
The Mann-Whitney results of the three algorithms, given as U value (significance level).

Objective                          Problem    Comparison        m = 3          m = 5          m = 10
Minimizing total travel distance   MTSP51     MTWPS vs. TCX     632 (0.007)    894 (0.000)    900 (0.000)
                                              MTWPS vs. TWPS    889 (0.000)    900 (0.000)    900 (0.000)
                                   MTSP100    MTWPS vs. TCX     900 (0.000)    900 (0.000)    480 (0.623)
                                              MTWPS vs. TWPS    900 (0.000)    900 (0.000)    900 (0.000)
                                   MTSP150    MTWPS vs. TCX     608 (0.019)    894 (0.000)    867 (0.000)
                                              MTWPS vs. TWPS    900 (0.000)    900 (0.000)    900 (0.000)
Minimizing longest tour            MTSP51     MTWPS vs. TCX     867 (0.000)    721 (0.000)    525 (0.021)
                                              MTWPS vs. TWPS    900 (0.000)    900 (0.000)    900 (0.000)
                                   MTSP100    MTWPS vs. TCX     900 (0.000)    882 (0.000)    375 (0.267)
                                              MTWPS vs. TWPS    900 (0.000)    900 (0.000)    900 (0.000)
                                   MTSP150    MTWPS vs. TCX     693 (0.000)    897 (0.000)    529 (0.245)
                                              MTWPS vs. TWPS    900 (0.000)    900 (0.000)    900 (0.000)
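Stepping back to the terminal-generation discussion of Section 6.1.1: the 189975 figure quoted there can be reproduced in a few lines. For n = 150 and Step = 1 there are n(n−1)/2 = 11175 element pairs, and k = 17 is the smallest accommodation coefficient for which the lower bound (1 − (1/2)^k)^(n(n−1)/2) from Proposition 1 exceeds 90%. A quick numerical check (our arithmetic, not code from the paper):

```python
n, Step = 150, 1
pairs = n * (n - 1) // 2                    # 11175 element pairs in the first part
k = 1
while (1 - 0.5 ** k) ** pairs <= 0.90:      # Proposition 1's lower bound
    k += 1
print(k, k * pairs // Step)                 # -> 17 189975
```

Since 189975 greatly exceeds the Gmax = 20000 actually used, the maximum terminal generation is indeed the binding limitation for MTSP150.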

Table 4
Computer system used for the experimental test.

Computer                   CPU                                        RAM                       OS                               IDE
Self-assembled computer    Intel(R) Core(TM) i7-4790K CPU @ 4.0 GHz   32.00 GB DDR3 1600 MHz    Windows 7 Professional 64-bit    MATLAB R2014a

Table 5
Average simulation time of the MTWPS algorithm.

Objective                          Problem    m = 3     m = 5      m = 10
Minimizing total travel distance   MTSP51     1008 s    1428 s     2125 s
                                   MTSP100    2598 s    3897 s     6549 s
                                   MTSP150    5407 s    8197 s     14684 s
Minimizing longest tour            MTSP51     1216 s    1670 s     2977 s
                                   MTSP100    3279 s    5161 s     10022 s
                                   MTSP150    6682 s    10990 s    21972 s

6.3. Time complexity analysis

In this part, we analyze the time complexity of our MTWPS algorithm. The time complexity depends on multiple factors, including the computational complexity of the algorithm, the complexity level of the optimization problem, the performance of the simulation computer, the programming language and so on. The simulations of our MTWPS algorithm are coded in MATLAB and run on the computer system shown in Table 4.

Under these conditions, the average simulation time of the MTWPS algorithm for all cases (MTSP51, MTSP100 and MTSP150) is shown in Table 5.

In Table 5, we can see that the longest simulation time among these cases is 21972 s. In Ref. [11], all programs were implemented in C++, and it is well known that, for the same algorithm, the CPU time of a MATLAB program is usually tens of times longer than that of a C++ program. So the maximum terminal generations of the TCX algorithm can be set as 50000, 100000 and 200000 within an acceptable CPU time, whereas if our terminal generations were set to these values, 30 trials would cost too much time. So our maximum terminal generation Gmax is set as 20000.

6.4. Robustness

6.4.1. Random maps

The above simulations are all based on some classical data sets. Solving these problems alone is not sufficient to show that the MTWPS algorithm performs better than the other two methods, so the positions of the cities are also generated randomly in a 1000 × 1000 map. Of course, within any one simulation, the maps used by the three algorithms are the same. In these simulations, the number of salesmen m is set as 3, the numbers of cities are set as 51, 100 and 150, respectively, and the cost function is the minimum total travel distance of all the salesmen. The best cost functions of the three methods are shown in Fig. 10.

Fig. 10. The best cost functions of the three methods in the random maps.

6.4.2. Simulation parameters

Apart from the simulation map, the parameter settings are very significant for the quality and performance of the MTWPS algorithm, so the robustness and sensitivity of the MTWPS parameters are analyzed in this part.

The first important parameter is the population size of the wolf pack, N. In Table 6, the average cost functions of the MTWPS algorithm with the objective of minimizing the total travel distance for the MTSP51, MTSP100 and MTSP150 problems (m = 10) are shown as the population size increases.

Table 6
The average cost functions of the MTWPS algorithm for different population sizes.

Population size N    MTSP51    MTSP100    MTSP150
50                   631       31120      50313
100                  627       30751      49207
500                  614       30150      48519
1000                 610       29901      48131

It is easy to understand that the mean values of the solutions decrease with the increase of the population size N, but meanwhile the computation time increases.

The other important parameter is the approach rate Step. In Table 7, the influence of the approach rate Step on the MTWPS algorithm with the objective of minimizing the total travel distance for the MTSP51, MTSP100 and MTSP150 problems (m = 10) is shown.

Table 7
The robustness and sensitivity of the MTWPS algorithm with respect to the approach rate.

Approach rate Step    Avg MTSP51    Avg MTSP100    Avg MTSP150    Best MTSP51    Best MTSP100    Best MTSP150
1                     627           30751          49207          602            29897           47709
2                     620           30897          48303          602            29897           47129
3                     631           32021          46724          621            31534           46260
4                     630           32317          45862          622            31601           45500
5                     664           37211          45606          652            36807           45180

It is easy to find that, for the small-sized problem, such as the MTSP51 problem, the solution quality remains the same when the approach rate Step is small and becomes worse when the approach rate Step is bigger than 2. The main causes are that, for the small-sized problem, the maximum terminal generation of 20000 is big enough to finish the update process of the algorithm, while a too-big approach rate Step leads to missing some potentially optimal or sub-optimal points. So, for the small-sized problem, the approach rate Step had better be set as 1 or 2. However, for the large-sized problem, especially the MTSP150 problem, the solution quality improves significantly with the increase of the approach rate Step; the reason has been discussed in Section 6.1.1.

6.5. Optimality gap

As a meta-heuristic algorithm, the MTWPS algorithm cannot guarantee finding the optimal solutions of the symmetric MTSP. Still, solving small instances to optimality is a good way to evaluate whether a meta-heuristic algorithm is good enough for a complex NP-hard problem, because it is hard to imagine an algorithm that cannot solve small-sized problems but can solve large-sized problems effectively. So the MTWPS is used to solve several small-sized MTSP problems, including 4 small-sized problems in random maps and the 5 classical small-sized problems named 11a, 11b, 12a, 12b and 16 in Ref. [11]. The optimal solutions of the five classical problems are presented in Ref. [11]; the optimal solutions of the other 4 problems are obtained by exhaustive search. In the 4 simulations with the random maps, the numbers of salesmen m are all set as 3, and these problems with n cities are named mTSPrn. The simulation results are shown in Table 8.

The results presented in Table 8 show that the MTWPS algorithm is able to find the optimal solutions of a set of different small-sized MTSPs. It can further be inferred that it can at least find a sub-optimal solution for other large-sized MTSPs.

Table 8
The simulation results of the small-sized problems with known optimal solutions.

Data set    Optimal    MTWPS Avg    MTWPS Best    MTWPS SD
Minmax
11a         77         77           77            0
11b         73         73           73            0
12a         77         77           77            0
12b         983        983          983           0
16          94         94           94            0
mTSPr10     537        537          537           0
mTSPr11     1593       1593         1593          0
Minsum
11a         198        198          198           0
11b         135        135          135           0
12a         199        199          199           0
12b         2295       2295         2295          0
16          242        242          242           0
mTSPr10     5062       5062         5062          0
mTSPr11     5039       5039         5039          0

7. Conclusion

This paper first introduces a novel TWPS algorithm based on the original WPS algorithm in order to solve the MTSP. Secondly, the convergence performance of the TWPS algorithm is analyzed and discussed to help understand the reasonability of the maximum terminal generation. Then, based on the definition and discussion of the global reachability of the initial population in the TWPS algorithm, the TE operation is proposed to break through this weakness; it greatly enhances the search ability of the TWPS algorithm, and the resulting method is named the MTWPS algorithm. Finally, experimental results, including the comparison, the robustness and the optimality gap, focused on the objectives of minimizing the total travel distance and minimizing the longest tour, show that the MTWPS algorithm produces higher solution quality than the TWPS algorithm and the GA with the TCX operator.

Our future work will focus on two main areas. Firstly, we would like to complete the convergence analysis of the MTWPS algorithm by means of Markov chains. Secondly, since many practical problems can be cast as the MTSP or the MTSPTW, we will apply the MTWPS algorithm to solving a large-sized MTSPTW problem in the UAV mission planning problem.

Acknowledgment

We would like to thank A.P. Shoudong Huang (UTS) and Dr. M.S. Yuan for providing us with data sets and useful information.

References

[1] H. Qu, Z. Yi, H.J. Tang, A columnar competitive model for solving multi-traveling salesman problem, Chaos Solitons Fractals 31 (4) (2007) 1009–1019.
[2] K. Chang, A genetic algorithm based heuristic for the design of pick-up and delivery routes for strategic alliance in express delivery services, 7th IFAC Conference on Manufacturing Modelling, Management, and Control (2013) 1938–1943.
[3] L.X. Tang, J.Y. Liu, A.Y. Rong, Z.H. Yang, A multiple traveling salesman problem model for hot rolling scheduling in Shanghai Baoshan Iron & Steel Complex, Eur. J. Oper. Res. 124 (2000) 267–282.
[4] J.M. Charles, A genetic algorithm for service level based vehicle scheduling, Eur. J. Oper. Res. 93 (1996) 121–134.
[5] X.B. Wang, C.R. Amelia, Local truckload pickup and delivery with hard time window constraints, Transp. Res. Part B 36 (2002) 97–112.
[6] K.H. Kim, Y.M. Park, A crane scheduling method for port container terminals, Eur. J. Oper. Res. 156 (2004) 752–768.
[7] A. Mehdi, K. Yoshiaki, P.H. Jonathan, Coordination and control of multiple UAVs with timing constraints and loitering, Proceedings of the American Control Conference (ACC) (2003) 5311–5316.
[8] E. Lanah, I.B. Ana, M. Herman, W. Albert, Online stochastic UAV mission planning with time windows and time-sensitive targets, Eur. J. Oper. Res. 238 (2014) 348–362.
[9] S. Tal, J.R. Steven, G.S. Andrew, UAV cooperative multiple task assignments using genetic algorithms, American Control Conference (ACC) (2005) 2989–2994.
[10] E.C. Arthur, T.R. Cliff, A new approach to solving the multiple traveling salesperson problem using genetic algorithms, Eur. J. Oper. Res. 175 (2006) 246–257.
[11] S. Yuan, S. Bradley, S.D. Huang, D.K. Liu, A new crossover approach for solving the multiple travelling salesmen problem using genetic algorithms, Eur. J. Oper. Res. 228 (2013) 72–82.
[12] G. Laporte, Y. Nobert, A cutting planes algorithm for the m-salesmen problem, J. Oper. Res. Soc. 31 (1980) 1017–1023.
[13] S. Gorenstein, Printing press scheduling for multi-edition periodicals, Manage. Sci. 16 (6) (1970) 373–383.
[14] O. Paul, R. Sivakumar, D. Swaroop, A transformation for a multiple depot, multiple traveling salesman problem, American Control Conference (ACC) (2009) 2636–2641.
[15] B. Tolga, The multiple traveling salesman problem: an overview of formulations and solution procedures, Omega 34 (2006) 209–219.
[16] B. Gavish, K. Srikanth, An optimal solution method for large-scale multiple traveling salesman problems, Oper. Res. 34 (5) (1986) 698–717.
[17] S. Saad, W. Nurhadani, W. Jaafar, S.J. Jamil, Solving standard traveling salesman problem and multiple traveling salesman problem by using branch and bound, AIP Conference Proceedings (2013) 1406–1411.

[18] L. Kota, K. Jarmai, Mathematical modeling of multiple tour multiple traveling salesman problem using evolutionary programming, Appl. Math. Modell. 39 (12) (2015) 3410–3433.
[19] P. Wang, S. Cesar, S. Edward, Evolutionary algorithm and decisional DNA for multiple travelling salesman problem, Neurocomputing 150 (2015) 50–57.
[20] S. Belhaiza, P. Hansen, G. Laporte, A hybrid variable neighborhood tabu search heuristic for the vehicle routing problem with multiple time windows, Comput. Oper. Res. 52 (2014) 269–281.
[21] C.H. Song, K. Lee, W.D. Lee, Extended simulated annealing for augmented TSP and multi-salesmen TSP, Proceedings of the International Joint Conference on Neural Networks (2003) 2340–2343.
[22] J. Li, M.C. Zhou, Q.R. Sun, X.Z. Dai, X.L. Yu, Colored traveling salesman problem, IEEE Trans. Cybern. 45 (11) (2015) 2390–2401.
[23] K. Elad, C. Kelly, K. Manish, A market-based solution to the multiple traveling salesmen problem, J. Intell. Robot. Syst. 72 (2013) 21–40.
[24] S. Wu, T.W.S. Chow, Self-organizing and self-evolving neurons: a new neural network for optimization, IEEE Trans. Neural Netw. 18 (2) (2007) 385–396.
[25] T.A.S. Masutti, L.N.D. Castro, A clustering approach based on artificial neural networks to solve routing problems, 11th IEEE International Conference on Computational Science and Engineering (2008) 285–292.
[26] M. Soylu, N.E. Ozdemirel, S. Kayaligil, A self-organizing neural network approach for the single AGV routing problem, Eur. J. Oper. Res. 121 (1) (2000) 124–137.
[27] Y.B. Park, A hybrid genetic algorithm for the vehicle scheduling problem with due times and time deadlines, Int. J. Prod. Econ. 73 (2001) 175–188.
[28] K. András, A. János, Optimization of multiple traveling salesmen problem by a novel representation based genetic algorithm, Intell. Comput. Optim. Eng. 366 (2011) 241–269.
[29] C.G. Yang, X.Y. Tu, J. Chen, Algorithm of marriage in honey bees optimization based on the wolf pack search, International Conference on Intelligent Pervasive Computing (2007) 462–467.
[30] H.S. Wu, F.M. Zhang, Wolf pack algorithm for unconstrained global optimization, Math. Prob. Eng. 465082 (2014) 1–17.
[31] S. Garcia, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization, J. Heuristics 15 (2009) 617–644.