
Mathl Comput. Modelling, Vol. 13, No. 3, pp. 67-71, 1990
0895-7177/90 $3.00 + 0.00
Printed in Great Britain. All rights reserved
Copyright © 1990 Pergamon Press plc

AN EFFICIENT PROCEDURE FOR OBTAINING FEASIBLE SOLUTIONS
TO THE n-CITY TRAVELING SALESMAN PROBLEM

B. R. FEIRING
Department of Information Systems and Decision Sciences, College of Business Administration,
University of South Florida, Tampa, FL 33620, U.S.A.

Abstract-Simple procedures for the traveling salesman problem are presented. The procedures are
developed from a new solution generation scheme, which successively exchanges the rows and columns
of the matrix of distances. At each stage of the computation, the cost and the tour are uniquely determined.

1. INTRODUCTION

The traveling salesman problem (TSP) asks, given a set of cities and the cost of a trip between every pair of cities, for a minimum-cost tour. On this tour the salesman must visit every city exactly once and return to the starting city at the end, so that the tour forms a single circuit.
The TSP is one of the classic and standard problems in the fields of operations research and
management science [1-3]. It has a variety of applications in practice, since it is a prototype of "hard
combinatorial" optimization. Applications include circuit layout, vehicle routing, production
management and job sequencing [1, 4].
The TSP is a problem that is easy to state but difficult to solve, although a solution to the
problem always exists. (It is one of the infamous NP-hard or NP-complete problems, depending
on whether the problem is symmetric or asymmetric [5-7].)
For the n-city TSP, there are exactly (n - 1)! possible tours to be examined and one of these
will provide the cheapest cost. The most obvious approach to obtain a solution is to generate all
(n - 1)! feasible tours and select the one with the lowest cost. Consider a four-city problem.
Suppose a salesman lives in New York (N) and is required to visit Chicago (C), Minneapolis (M)
and Boston (B). He takes an airline and the fare between any two cities is known. There are only
six (3!) possible round trips for him:
N → C → M → B → N,
N → C → B → M → N,
N → B → C → M → N,
N → B → M → C → N,
N → M → B → C → N
or
N → M → C → B → N.
It is only necessary to calculate the total fare of each round trip and choose the trip with the
cheapest cost.
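For concreteness, the total enumeration just described can be sketched in a few lines of Python. The fares below are invented purely for illustration; only the enumeration scheme (generate all 3! round trips and take the cheapest) reflects the text.

```python
# Brute-force enumeration of the six round trips in the four-city example.
# The fares are placeholder values chosen only for illustration.
from itertools import permutations

fare = {
    ("N", "C"): 180, ("N", "M"): 220, ("N", "B"): 90,
    ("C", "M"): 110, ("C", "B"): 200, ("M", "B"): 250,
}

def cost(a, b):
    """Symmetric fare between two cities."""
    return fare.get((a, b), fare.get((b, a)))

home, others = "N", ["C", "M", "B"]
tours = []
for order in permutations(others):            # 3! = 6 possible round trips
    route = (home, *order, home)
    total = sum(cost(route[k], route[k + 1]) for k in range(len(route) - 1))
    tours.append((total, route))

best_total, best_route = min(tours)
print("cheapest round trip:", " -> ".join(best_route), "costs", best_total)
```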
The approach becomes computationally infeasible as the number of cities increases, even by just
one or two. As seen in Table 1, the number of feasible tours grows very rapidly as the number
of cities increases. Therefore, the combinatorial approach of generating permutations and
calculating the corresponding costs fails to solve large-scale TSPs.
The TSP is equivalent to a general zero-one integer programming problem with subtour
elimination constraints [8]. The difficulties of this approach are: (1) the number of constraints which
eliminate the subtours grows very rapidly; and (2) the formulation of the constraints is difficult.
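For reference, the kind of zero-one formulation alluded to here can be sketched as follows, with x_ij = 1 when the tour travels directly from city i to city j. The notation and the cut form of the subtour-elimination constraints are one standard way of writing the model, not necessarily the exact formulation of Ref. [8].

```latex
\begin{align*}
\min\; & \sum_{i=1}^{n}\sum_{j\ne i} w(i,j)\,x_{ij} \\
\text{s.t.}\; & \sum_{j\ne i} x_{ij} = 1, \quad i = 1,\dots,n && \text{(each city is left exactly once)}\\
& \sum_{i\ne j} x_{ij} = 1, \quad j = 1,\dots,n && \text{(each city is entered exactly once)}\\
& \sum_{i\in S}\sum_{j\notin S} x_{ij} \ge 1, \quad \emptyset \ne S \subsetneq V && \text{(subtour elimination)}\\
& x_{ij} \in \{0,1\}.
\end{align*}
```

The subtour-elimination family contains one constraint for every proper nonempty subset S of V, which is exactly difficulty (1) above.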

Table 1
No. of cities        Tours
      4                  6
      5                 24
      6                120
      7                720
      8               5040
      9             40,320
     10            362,880
     15           8.71E+10
     20           1.21E+17
     30            8.8E+30
     40           2.03E+46
     50           6.08E+62
    100          9.33E+157

A k-city tour, where k is less than the size of the problem, is called a subtour. Since a
salesman is required to visit every city once and only once, no k-city tours with k < n can be
allowed.
The solution generation scheme described in this paper does not permit subtours. This provides
several advantages that will be described subsequently. All computations were performed on the
Apple II Plus microcomputer with 48K.

2. NOTATION

Generally, in graph-theoretic terms, the TSP is that of finding a minimum-weight Hamiltonian
cycle (or circuit) in a weighted, undirected graph G = (V, E), where V is the set of nodes (cities)
and E is the set of arcs. For simplicity, the cities are denoted by the integers 1, 2, 3, ..., n-1, n
(n-city TSP), and the following notation is introduced:

G = (V, E) is a directed, weighted graph,
V = {1, 2, 3, ..., n-1, n} is the set of n cities,
E = {(i, j): i ∈ V, j ∈ V} is the set of all possible trips between each pair of cities,
W = [w(i, j): (i, j) ∈ E] is the n x n cost (weight) matrix with elements w(i, j),

T_k = (v_1, v_2, v_3, ..., v_k) is a k-city tour, where v_j is the jth node, k ≤ n and v_i ≠ v_j for all i ≠ j,
and

C(T_k) = C(v_1, v_2, v_3, ..., v_k)
       = w(v_1, v_2) + w(v_2, v_3) + ... + w(v_k, v_1).
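As an illustration of this notation, the following is a minimal Python sketch that evaluates C(T_k) for a tour stored as a list of city indices. Indices are 0-based here rather than 1-based, and the small weight matrix is invented for illustration only.

```python
# Evaluate C(T) = w(v1, v2) + w(v2, v3) + ... + w(vk, v1)
# for a tour given as a sequence of (0-based) city indices.
def tour_cost(tour, w):
    k = len(tour)
    return sum(w[tour[i]][tour[(i + 1) % k]] for i in range(k))

# Small asymmetric weight matrix, values are illustrative only.
w = [
    [0, 3, 5, 9],
    [4, 0, 7, 2],
    [6, 8, 0, 1],
    [9, 2, 4, 0],
]
print(tour_cost([0, 1, 2, 3], w))   # w(0,1) + w(1,2) + w(2,3) + w(3,0) = 3 + 7 + 1 + 9 = 20
```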

3. A SURVEY OF SOLUTION METHODOLOGIES

Since a feasible tour for the TSP is a permutation of the elements of V [a cyclic permutation
of order n - 1, i.e. (n - 1)! possible feasible tours], and since the solution to this problem always
exists, the combinatorial approaches to obtain the optimal solution may be classified into three
types: (1) tour generations, (a) T_n, T_n', T_n'', ... (tour-to-tour improvement), (b) T_1, T_2, T_3, ..., T_n
(tour-building); and (2) the subtour elimination method using mixed integer programming of the
assignment problem with subtour elimination constraints [9].
Approach (1a) gives rather efficient algorithms which can handle large-scale TSPs but does not
guarantee the optimal solution, since these algorithms are an approximation to total enumeration.
Much research has been dedicated to finding a way to modify and measure closeness [1, 10-14],
perhaps the most important being Ref. [15]. Approach (1b) is based on the exhaustive search
technique, using typical algorithms [10, 16, 17] and dynamic programming (DP) [16]. The algorithm
of Little [17], 1963, is still considered by several researchers to be the best available. DP, when
applied to the TSP, does not reduce the computational time as it frequently does in other
applications. Furthermore, even for small problems, the amount of storage is a bottleneck [18].

The last approach (2) has difficulties similar to DP, since the number of constraints and variables
required to formulate the subtour elimination is enormous and the resulting formulation demands
a great deal of computational effort.

4. THE DEVELOPMENT OF THE SOLUTION GENERATION SCHEME

The cost of a tour is always determined by the summation of the superdiagonal elements, i.e.
the elements immediately above the main diagonal of the weight matrix, and to decrease the cost
the rows and the columns of the weight matrix are successively permuted. [It should be noted that
w(1, n) is assumed to be on the superdiagonal for convenience.]
The following definitions are required:

B(P) = w(1, i_1) + w(2, i_2) + w(3, i_3) + ... + w(n, i_n),

where P is the permutation of the integers from 1 through n, i.e.

P = (i_1, i_2, i_3, ..., i_{n-1}, i_n).

Then the cost of the tour T = (1, 2, 3, ..., n-1, n) is calculated by B(P), where P is
(n, 1, 2, 3, ..., n-1). It is clear that the number of permutations performed with the weight matrix
is (n - 1)!.
Using B(P), the sequences of the modification can be expressed as

B(T_n)   = w(1, n) + w(2, 1) + w(3, 2) + ... + w(n, n-1),

B(T_n')  = w'(1, n) + w'(2, 1) + w'(3, 2) + ... + w'(n, n-1),

B(T_n'') = w''(1, n) + w''(2, 1) + w''(3, 2) + ... + w''(n, n-1),

where the w'(i, j) are the elements of the modified weight matrix.


Note that for any B, if the w' are redefined as w in the modified cost matrix, then C(T_n) is

C(T_n) = C(1, 2, 3, 4, ..., n-1, n)
       = B(T_n).

This indicates that the tour generated successively is always feasible.


Furthermore, in group-theoretic terms, the notation B corresponds to the symmetric group representation.
This solution generation scheme therefore allows one to use the weight matrix itself to represent the tour.
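The key invariant behind the scheme can be checked numerically: interchanging two cities in the tour corresponds to interchanging the same pair of rows and columns of W, after which the cost of the new tour can be read off the same fixed positions of the modified matrix. The sketch below uses a random weight matrix and reads the cost as w(1, 2) + w(2, 3) + ... + w(n, 1) rather than through the author's B(P) indexing; it is an illustration of the invariant, not of the exact bookkeeping in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.integers(1, 20, size=(n, n)).astype(float)
np.fill_diagonal(W, 0)                       # no cost to stay in place

def identity_tour_cost(W):
    """Cost of the tour (1, 2, ..., n) read from fixed positions of the matrix:
    w(1,2) + w(2,3) + ... + w(n-1,n) + w(n,1)."""
    n = len(W)
    return sum(W[k, k + 1] for k in range(n - 1)) + W[n - 1, 0]

def swap(W, i, j):
    """Exchange the ith and jth rows and the ith and jth columns (0-based)."""
    W2 = W.copy()
    W2[[i, j], :] = W2[[j, i], :]
    W2[:, [i, j]] = W2[:, [j, i]]
    return W2

def tour_cost(tour, W):
    n = len(tour)
    return sum(W[tour[k], tour[(k + 1) % n]] for k in range(n))

i, j = 1, 4
print(identity_tour_cost(swap(W, i, j)))     # cost read from the modified matrix
tour = list(range(n)); tour[i], tour[j] = tour[j], tour[i]
print(tour_cost(tour, W))                    # cost of the exchanged tour in the original matrix
```

Both printed values coincide, which is the sense in which the successively modified weight matrix represents the tour itself.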

5. TOUR-TO-TOUR IMPROVEMENT ALGORITHM

Take the ith column and the jth column to be interchanged to obtain the new cost. Let the
original tour, T_n, and the modified tour, T_n', be

T_n  = {1, 2, 3, ..., i-1, i, i+1, ..., j-1, j, j+1, ..., n}

and

T_n' = {1, 2, 3, ..., i-1, j, i+1, ..., j-1, i, j+1, ..., n},

and let the ordered-pair representation of T_n be O_n.

The indices of the elements of the cost matrix, W, which come from the set O_n and reduce the
current cost of T_n are

S_1 = {(i-1, i), (i, i+1), (j-1, j), (j, j+1)}.

Those elements which enter and increase the cost are

S_2 = {(i-1, j), (i, j+1), (j-1, i), (j, i+1)},

which is true if i + 1 < j, i.e. the interchange is not adjacent.



For the adjacent case, j = i + 1,

S_1 = {(i-1, i), (i, i+1), (i+1, i+2)}

and

S_2 = {(i-1, i+1), (i, i+2), (i+1, i)}.

The cost of the tour is varied by the amounts w(i, j) over S_1 and S_2. That is,

C(T_n') = C(T_n) - Σ_{over S_1} w(i, j) + Σ_{over S_2} w(i, j).
Outline of the algorithm
For the current tour T_n calculate

D(i, j) = -Σ_{over S_1} w(i, j) + Σ_{over S_2} w(i, j).

If there exists a negative D(i, j), the exchange of the ith and jth columns improves the cost:

Step 0. Set Y < 0, go to Step 2.
Step 1. If Y > 0, then go to Step 2.
        Else stop: no improvement.
Step 2. Evaluate D(i, j).
        Non-adjacent case: i from 2 to n - 2,
                           j from i + 2 to n.
        Adjacent case:     i from 2 to n - 1,
                           j = i + 1.
        Set Y = the largest D(i, j).
        Exchange the ith column and the jth column.
        Exchange the ith row and the jth row.
        Output C(T_n) and T_n.
        Go to Step 1.
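A minimal Python sketch of the pairwise-exchange improvement idea follows. It keeps the starting city fixed and recomputes tour costs directly, rather than maintaining D(i, j) or physically exchanging rows and columns of W as the outline above does; the stopping rule (terminate when no exchange lowers the cost) is the same.

```python
def tour_cost(tour, W):
    """Cost of a closed tour given as a list of 0-based city indices."""
    n = len(tour)
    return sum(W[tour[k]][tour[(k + 1) % n]] for k in range(n))

def pairwise_exchange(tour, W):
    """Tour-to-tour improvement by exchanging two cities in the tour.
    Repeatedly applies the best available exchange until none improves."""
    tour = tour[:]
    n = len(tour)
    best_cost = tour_cost(tour, W)
    while True:
        best_swap = None
        for i in range(1, n - 1):          # position 0 (the home city) stays put
            for j in range(i + 1, n):
                cand = tour[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = tour_cost(cand, W)
                if c < best_cost:
                    best_cost, best_swap = c, (i, j)
        if best_swap is None:
            return tour, best_cost          # no exchange improves: local optimum
        i, j = best_swap
        tour[i], tour[j] = tour[j], tour[i]
```

Like the procedure in the text, this is a heuristic: it stops at a tour that no single exchange can improve, which need not be the global optimum.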

6. EXHAUSTIVE SEARCH ALGORITHM

The algorithm computes the lower bound of the branch-and-bound method. At any city, the
algorithm gives the partial cost of the tour and the entire cost of the trip.

Outline of the algorithm


The driver algorithm combines backtracking with the nearest-neighbor method. The procedure
works as if the salesman were traveling immediately above the diagonal elements. He moves
up and down diagonally, exchanging the column which has the least positive value in his current
row, his current column value included. After the (partially performed) row-column reduction is
made, if the lower bound on the rest of the tour is less than the residual (the current optimal
value of the tour minus the partial cost of the tour already traveled), he keeps going down.
Otherwise, i.e. if the lower bound is bigger, he goes back one step and tries to travel to
the next nearest city. At the same time he compares the current trip cost (= B(T)) with the current
optimal value.
Step 0. Set the pointer to 1, C_p = 0,
        where C_p is the partial cost of the tour.
Step 1. Advance the pointer.
        Find the nearest city not yet traveled.
        Evaluate C_p.
        Compare C_p to the current minimum value.
        If C_p > the current minimum value, then calculate the lower bound of the
        rest of the tour.
        Else go to Step 2.

Step 2. Back up the pointer.
        If the pointer = 1, then stop.
        Find the nearest city which is not yet traveled.
        Evaluate C_p.
        Compare C_p to the current minimum value.
        If C_p > the current minimum value, then go to Step 1;
        otherwise repeat Step 2.
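A minimal Python sketch of this exhaustive search follows. The branching visits the nearest unvisited city first and backtracks on failure, as in the outline; the lower bound used here (the cheapest outgoing arc of every city that still has to be left) is a simple stand-in for the partial row-column reduction described above, so the pruning is weaker than the author's, but the search remains exact.

```python
def exhaustive_search(W):
    """Exact search: nearest-neighbor branching, backtracking, and pruning
    against the incumbent with a simple outgoing-arc lower bound."""
    n = len(W)
    cheapest_out = [min(W[i][j] for j in range(n) if j != i) for i in range(n)]
    best_cost, best_tour = float("inf"), None

    def visit(tour, visited, partial):
        nonlocal best_cost, best_tour
        last = tour[-1]
        if len(tour) == n:                              # close the circuit
            total = partial + W[last][0]
            if total < best_cost:
                best_cost, best_tour = total, tour[:]
            return
        # the current city and every unvisited city must still be left once
        bound = partial + cheapest_out[last] + sum(
            cheapest_out[c] for c in range(n) if c not in visited)
        if bound >= best_cost:
            return                                      # backtrack: cannot beat incumbent
        for nxt in sorted((c for c in range(n) if c not in visited),
                          key=lambda c: W[last][c]):    # nearest city first
            visited.add(nxt)
            visit(tour + [nxt], visited, partial + W[last][nxt])
            visited.remove(nxt)

    visit([0], {0}, 0.0)
    return best_tour, best_cost
```

Because the bound never overestimates the cost of completing a tour, the search still returns an optimal tour; the quality of the bound only affects how much of the search tree is pruned.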

7. SUMMARY

The algorithm presented in this paper has the following advantages:


(1) The storage requirements for the computer are small, i.e. n x n for the weight
matrix and n for the record of the current tour.
(2) The procedure is simple. It is not difficult to improve the efficiency.
(3) The algorithm can handle problems on a mainframe or a microcomputer.
The current program can solve problems of up to 50 cities on a microcomputer with 48K of memory,
and the number of cities is easily increased with the addition of memory. Thus, this procedure
provides an efficient heuristic that can be utilized to solve such TSPs as arise in vehicle scheduling,
circuit card assembly layout and job sequencing.

REFERENCES

1. G. B. Dantzig, D. R. Fulkerson and S. M. Johnson, Solutions of a large-scale traveling salesman problem. Ops Res. 2, 393-410 (1954).
2. M. M. Flood, The traveling salesman problem. Ops Res. 4, 61-75 (1956).
3. C. Gary, Graphs as Mathematical Models. Prindle, Weber & Schmidt (1978).
4. M. Rothkopf, The traveling salesman problem: on the reduction of certain large problems to smaller ones. Ops Res. (in press).
5. S. S. Anderson, Graph Theory and Finite Combinatorics. Markham, Chicago, Ill. (1970).
6. B. Carre, Graphs and Networks. Clarendon Press, Oxford (1979).
7. M. C. Golumbic, Algorithmic Graph Theory and Perfect Graphs. Academic Press, New York (1980).
8. G. B. Dantzig, D. R. Fulkerson and S. M. Johnson, On a linear-programming, combinatorial approach to the traveling salesman problem. Ops Res. 7, 58-66 (1959).
9. M. Bellmore and G. L. Nemhauser, The traveling salesman problem: a survey. Ops Res. 16, 538-558 (1968).
10. G. A. Croes, A method for solving the traveling salesman problem. Ops Res. 6, 791-812 (1958).
11. H. Crowder and M. W. Padberg, Solving large-scale symmetric traveling salesman problems to optimality. Mgmt Sci. 26(5), 495-508 (1980).
12. M. F. Dacey, Selection of an initial solution for the traveling salesman problem. Ops Res. 8, 133-134 (1960).
13. M. L. Fisher, G. L. Nemhauser and L. A. Wolsey, An analysis of approximations for finding a maximum weight Hamiltonian circuit. Ops Res. 27, 799-809 (1979).
14. D. J. Rosenkrantz, R. E. Stearns and P. M. Lewis, An analysis of several heuristics for the traveling salesman problem. SIAM Jl Comput. 6, 563-581 (1977).
15. S. Lin and B. W. Kernighan, An effective heuristic algorithm for the traveling salesman problem. Ops Res. 21, 498-516 (1973).
16. E. L. Lawler and D. E. Wood, Branch-and-bound method: a survey. Ops Res. 14, 699-719 (1966).
17. J. D. C. Little, D. W. Sweeny and C. Karel, An algorithm for the traveling salesman problem. Ops Res. 11, 972-981 (1963).
18. M. Held and R. M. Karp, The traveling salesman problem and minimum spanning trees. Ops Res. 18, 1138-1162 (1970).
