This action might not be possible to undo. Are you sure you want to continue?

BooksAudiobooksComicsSheet Music### Categories

### Categories

### Categories

Editors' Picks Books

Hand-picked favorites from

our editors

our editors

Editors' Picks Audiobooks

Hand-picked favorites from

our editors

our editors

Editors' Picks Comics

Hand-picked favorites from

our editors

our editors

Editors' Picks Sheet Music

Hand-picked favorites from

our editors

our editors

Top Books

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Audiobooks

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Comics

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Sheet Music

What's trending, bestsellers,

award-winners & more

award-winners & more

Welcome to Scribd! Start your free trial and access books, documents and more.Find out more

Manufactured in The Netherlands.

**Expanding Neighborhood GRASP for the Traveling Salesman Problem
**

YANNIS MARINAKIS marinakis@ergasya.tuc.gr ATHANASIOS MIGDALAS sakis@verenike.ergasya.tuc.gr Decision Support Systems Laboratory, Department of Production Engineering and Management, Technical University of Crete, 73100 Chania, Greece PANOS M. PARDALOS Department of Industrial and Systems Engineering, University of Florida, USA Received June 17, 2003; Revised May 28, 2004; Accepted October 11, 2004 pardalos@cao.ise.uﬂ.edu

Abstract. In this paper, we present the application of a modiﬁed version of the well known Greedy Randomized Adaptive Search Procedure (GRASP) to the TSP. The proposed GRASP algorithm has two phases: In the ﬁrst phase the algorithm ﬁnds an initial solution of the problem and in the second phase a local search procedure is utilized for the improvement of the initial solution. The local search procedure employs two different local search strategies based on 2-opt and 3-opt methods. The algorithm was tested on numerous benchmark problems from TSPLIB. The results were very satisfactory and for the majority of the instances the results were equal to the best known solution. The algorithm is also compared to the algorithms presented and tested in the DIMACS Implementation Challenge that was organized by David Johnson [18]. Keywords: Traveling Salesman Problem, Greedy Randomized Adaptive Search Procedure, local search, Meta-Heuristics

1.

Introduction

Consider a salesman who has to visit n cities. The Traveling Salesman Problem (TSP) asks for the shortest tour through all the cities such that no city is visited twice and the salesman returns at the end of the tour back to the starting city. We speak of a symmetric TSP, if for all pairs i, j the distance ci j is equal to the distance c ji . Otherwise, we speak of the asymetric traveling salesman problem. If the cities can be represented as points in the plain such that ci j is the Euclidean distance between point i and point j, then the corresponding TSP is called the Euclidean TSP. Euclidean TSP obeys in particular the triangle inequality ci j ≤ cik + ck j for all i, j, k. The Traveling Salesman Problem (TSP) is one of the most famous hard combinatorial optimization problems. Since 1950s many algorithms have been proposed, developed and tested for the solution of the problem. The TSP belongs to the class of NP-hard optimization problems [19]. This means that no polynomial time algorithm is known for its solution. Algorithms for solving the TSP may be divided into two classes, exact algorithms and heuristic algorithms.

232

MARINAKIS, MIGDALAS AND PARDALOS

The exact algorithms are guaranteed to ﬁnd the optimal solution in exponential number of steps. The most effective exact algorithms are branch and cut algorithms [21] with which large TSP instances have been solved [2]. The problem with these algorithms is that they are quite complex and are very demanding of computer power [14]. For this reason it is very difﬁcult to ﬁnd optimal solution for the TSP, especially for problems with very large number of cities. Therefore, there is a great need for powerful heuristics that ﬁnd good suboptimal solutions in reasonable amounts of computing time. These algorithms are usually very simple and have short running times. In the 1960s, 1970s and 1980s the attempts to solve the traveling salesman problem focused on tour construction methods and tour improvement methods. The proposed modiﬁed GRASP essentially combines these two approaches into a new algorithm. Construction methods build up a tour step by step. One of the simplest methods is the nearest neighbor in which, a salesman starts from an arbitrary city and goes to its nearest neighbor. Then, he proceeds from there in the same manner. There has been proposed a number of construction algorithms for the solution of the traveling salesman problem, including insertion heuristics, the Christoﬁdes algorithm, which is based on spanning trees in the underlying graph, and cost saving algorithms. The problem with construction heuristics is that although they are usually fast, they do not, in general, produce very good solutions. The improvement methods start with a tour and try to transform it into a shorter tour. The most known of these algorithms is the 2-opt heuristic, in which two edges are deleted and the open ends are connected in a different way in order to obtain another tour. In the general case, r edges in a feasible tour are exchanged for r edges not in that solution as long as the result remains a tour and the length of that tour is less than the length of the previous tour. The most powerful, and for many years the algorithm which performed best among all heuristic, is the Lin–Kernighan algorithm [24]. It decides dynamically at each iteration what the value of r (the number of edges to exchange) should be [14, 27]. In the last ﬁfteen years, metaheuristics [1], such as simulated annealing, tabu search, genetic algorithms and neural networks, were introduced. These algorithms have the ability to ﬁnd their way out of local optima. In simulated annealing [31], this is achieved by allowing the length of the tour even to increase with a certain probability. Gradually the probability allowing the objective function value to increase is lowered until no more transformations are possible. Tabu search [31] uses a different technique to get out of local optima. The algorithm keeps a list of forbidden transformations. In this way, it may be necessary to use a transformation that deteriorates the objective function value in the next step. Genetic algorithms [4, 31] mimic the evolution process in nature. Their basic operation is the mating of two tours in order to form a new tour. Moreover, they use algorithmic analogs to mutation and selection. Although there are few papers applying neural networks [26] algorithms to the TSP, the results obtained are not competitive with the mentioned heuristics [17]. In this paper, we use a modiﬁed version of the well known Greedy Randomized Adaptive Search Procedure (GRASP) for the solution of the TSP. It should be noted that in the DIMACS Implementation Challenge [12] which was organized by David Johnson, Fred Glover and Cesar Rego and presented in [18], no GRASP algorithm for the solution of the TSP is mentioned.

In the second phase. The advantage of the GRASP compared to other heuristics is that there are only two parameters to tune (the size of the candidate list and the number of GRASP iterations). 32] but also global optimization problems with combinatorial neighborhoods. This randomized technique provides a feasible solution within each iteration. including problems in scheduling. location. Compared to tabu search. such as the Capacitated Network Flow Problem (NCFP) [15. before proceeding to the next iteration. Of course. 16] and the Quadratic Assignment Problem (QAP) [28. Each iteration consists of two phases. and the fact that is easier to implement and tune. The probabilistic component of a GRASP is characterized by randomly choosing one of the best candidate in the list but not necessary the top candidate. Resende and Ribeiro [32] . in each iteration. phase one is used to generate a starting point. GRASP appears to be competitive with respect to the quality of the produced solutions.EXPANDING NEIGHBORHOOD GRASP 233 GRASP [7. The heuristic is adaptive because the beneﬁts associated with every element are updated at each iteration of the construction phase to reﬂect the changes brought on by the selection of the previous element. This phase can be described as stepwise. 32] is an iterative two phase search which has gained considerable popularity in combinatorial optimization. the sequential approach is thought of as an iterative process. a local search is initialized from these points. 16. 15. The choice of the next element to be added is determined by ordering all elements in a candidate list with respect to a greedy function. This solution is then exposed for improvement attempts in the local search phase. The greedy algorithm is a simple one pass procedure for solving the traveling salesman problem. assignment. That is. then the local search of phase two is applied. the efﬁciency. and the ﬁnal result is simply the best solution found over all searches (multi-start local search). 8. adding one element at a time to the partial (incomplete) solution. That is. Typically. a randomized greedy technique provides feasible solutions incorporating both greedy and random characteristics. a randomized greedy function is used to build up an initial solution. In the construction phase. 29]. in the ﬁrst phase. 16. simulated annealing and genetic algorithms. GRASP has been used extensively to solve difﬁcult combinatorial optimization problems. The ﬁnal result is simply the best solution found over all iterations. a construction phase and a local search procedure. transportation [7. different problems require different construction and local search strategies. The GRASP algorithm may be described by the pseudo code below: algorithm GRASP do while stopping criteria not satisﬁed call GREEDY RANDOM SOLUTION(Solution) call LOCAL SEARCH(Solution) if Solution is better than Best Solution Found then Best Solution Found ← Solution endif enddo return Best Solution Found The neighborhood mapping used in the local search phase of the GRASP must also be deﬁned. logic.

j). a restricted 2-opt method is applied to improve the solution of the ﬁrst phase and. concluding remarks and extensions are given in the last section. as well as other difﬁcult problems. The most signiﬁcant of them concern the way the restricted candidate list (RCL) is constructed in the ﬁrst phase and the strategy used for exploiting the neighborhood during the local search in the second phase. Expanding neighborhood GRASP In the previous section. E) call Delete Min From List((i. Here. an RCL parameter. determines the level of greediness or randomness in the construction. In the Section 2. a single local search algorithm has been used for the improvement of the initial solution. ﬁnally. subsequently a restricted 3-opt method is tried in order to improve the 2-opt solution. the algorithm is explained step by step. The new modiﬁed version of GRASP. and has not frequently used in the literature. In the Section 3. then.234 MARINAKIS. E) Add l = (i.j). Subsequently. 2. The structure of the paper is as follows. In most of the implementations of GRASP. algorithm GRASP do while stopping criteria not satisﬁed S = ∅ ! S is the current solution !E = number of arcs call Make Queue(E) call Init Set(parent. ﬁrst a more detailed pseudocode of the proposed algorithm is presented and. In this section. the computational results are presented and. some type of value based RCL construction scheme has been used. MIGDALAS AND PARDALOS present a recent survey of GRASP and Festa and Resende [8] present an extensive annotated bibliography on GRASP. has a few differences from the original algorithm. one of them is chosen randomly to be the next candidate for inclusion to the tour. In most algorithms for the solution of the Traveling Salesman Problem. an analytical description of the proposed modiﬁed Greedy Randomized Adaptive Search Procedure heuristic is presented. This type of RCL is called a cardinality based RCL construction scheme. j) to RCL k=k+1 enddo m=0 . implemented in this paper for the solution of the Traveling Salesman Problem (TSP). In such a scheme. the Greedy Randomized Adaptive Search Procedure was examined and its generic framework was presented. number of nodes) k=0 do while k < D ! D is the size of the RCL call Select Min From List((i. In our implementation the parameter α was not used and the best promising candidate edges are selected to create the RCL. it is proposed the use of a combination of two well known local search methods instead of a single local search. First. α.

l) ∪ ( j. k) ∪ (i. k) call Calc Cost(S . l) call Add to List( j.root2. j) randomly from RCL root1 = Collapsing Find Set(i. S) call Union(root1.k) and (m. k) S = S \ (i. j) \ (l. E 1 ) !let (l. k) S = S \ (i. j) call Delete from List(l. E 1 ) !let (l. cost S ) if (cost S < cost S ) then Update the solution S ← S . j).EXPANDING NEIGHBORHOOD GRASP 235 do while m < n − 1 ! where n is the number of nodes Select l = (i.parent) Add the (D + m)ith edge to the RCL endif enddo call Calc Cost(S. cost S) ! 2-opt Given S ! where S is the current solution !E 1 = number of arcs of the initial solution call Make Queue(E 1 ) for i = 1 to n call Select Max From List((i. k) \ (m. cost S ) if (cost S < cost S ) then Update the solution S ← S cost S = cost S call Delete from List(i. k) call Add to List(i.parent) root2 = Collapsing Find Set(j.parent) if r oot1 = r oot2 then m=m+1 call Merge Subtours((i. k) endif enddo endfor ! 3-opt Given S ! where S is the current solution !E 1 = number of arcs of the initial solution call Make Queue(E 1 ) for i = 1 to n call Select Max From List((i. l) ∪ ( j. j). k) an edge of the current tour do for all possible (l. j). k) call Calc Cost(S . n) ∪ (i. m) ∪ (n.n) two edges of the current tour do for all possible (l. j) \ (l.

The algorithmic representation of the complete graph was achieved by a list of arcs. The representation of the sets is done using an integer array parent. we do not move items. MIGDALAS AND PARDALOS cost S = cost S call Delete from List(i. If I is the root of the tree. In this data structure. Any element I has its left child in position 2I . where parent(I ) is the parent of the element I in the tree where it belongs. The elements of the sets are integers. The function Collapsing Find Set returns the root of the set in which the element belongs. The choice of the data structure is a very signiﬁcant part of the algorithm as it plays a critical role for the efﬁciency of the algorithm. k) endif enddo endfor if Solution S is better than Best Solution Found then Best Solution Found ← Solution S endif enddo ! GRASP iteration completed return Best Solution Found In each iteration of GRASP a data structure for tour representation is needed. m) call Add to List(n. The heap is implemented as an integer array of pointers to an array of items. The root is in position 1 of the binary tree. The assumption is that the children of the root are already roots of subtrees that are in heap order. one for the starting nodes and another for the ending nodes.236 MARINAKIS. . The siftup/siftdown operations are used in adjusting the heap. j) call Delete from List(l. n) call Add to List(i. giving. The functions Select Min From List and Delete Min From List are used for removing the root of the heap and in adjusting it. used a third vector for the cost of each arc. we rather permute the pointers. The function Init Set initializes disjoint sets. We use the heap and the disjoint set data structures [33] in the construction phase. by making the smallest set a subtree of the other set. Finally. and its right child in position 2I + 1. where root1 and root2 are values returned by function Collapsing Find Set. where each set has one element (node). At the same time. It also uses an integer to keep the heap size. this function collapses the tree in order to reduce the longest path in the tree. That is. This vector contains the Euclidean distances of all nodes. The heap is a complete binary tree represented as an array. Finally. thus. a better complexity for a sequence of searches. the function Union joins two disjoint sets with root1 and root2. The disjoint sets data structure implements tree representation of pairwise disjoint sets. It. Of course. parent is in the position INT(I /2). k) call Delete from List(m. the root has no parent. then parent(I ) is less or equal to 0. and the total number of sets is equal to number of nodes. l) call Add to List( j. also. With the function Make Queue an array is converted into a heap. it used 2 vectors for the arcs.

k). ∀y ∈ N (x). • If S consists of a single city k. namely 2-opt and 3-opt. j} and replacing them with {i. j. The procedure of merging is achieved with the function Merge Subtour s. for its insertion in partial tour. the algorithm examines all the solutions that are closely related to this solution and ﬁnds a neighboring solution of lower cost. This type of RCL is called cardinality based RCL. Local search for the TSP is synonymous to k-opt moves. k and l be cities such that {i. If S consists of a single city k then the merged tour is TOUR(S. In the stage two of the construction phase. . the tours S. when applied to a TSP of n nodes. The candidate edge for inclusion in the tour is selected randomly from the RCL using a random number generator. after the choice of an element for inclusion in the tour. S that should next be merged are chosen so that min{ci j : i ∈ S and j ∈ S } is as small as possible. constructs a sequence S1 . The merged tour is then obtained by deleting {i. A solution x is called a local minimum of g with respect to the neighborhood N if g(x) ≤ g(y). After the completion of the construction phase. Sn such that each Si is a set of n − i + 1 disjoint subtours covering all the nodes. k} and { j. Kruskal’s algorithm. j} is an edge of S then cik + ck j − ci j is minimized. The improvement is achieved with the implementation of two different local search algorithms. l}. and successively merges the tours until a single tour containing all the cities is obtained. Initially.E) is created by ordering all the edges from the smallest to the largest cost using a heap data structure. each consisting of a single city. If the current number of tours exceeds one. . the RCL is readjusted in every iteration by replacing the edge which has been included in the tour by another edge that does not belong to the RCL. k } and { j. Given a feasible solution. The merged tour is then obtained by deleting {i. the ﬁrst D. k}. g) can be deﬁned as a mapping from S to its powerset: N : S → 2 S . It starts with n partial tours. In the proposed algorithm. . Finally. a combined neighborhood search is used. . let i. j} is an edge of S and {k. as it was presented above. If {i. j} is an edge of S then cik + ck j − ci j is minimized. l} and replacing them with {i. if one exists. k }. the merged tour is TOUR(S . From this list. j} and {k. If {i. A local search algorithm [25] is built around a neighborhood search procedure. The merged tour is then obtained by deleting {i. namely the (D + m)ith edge where m is the number of the current iteration. where D can vary from 30 to 150. A neighborhood N for the problem instance (S. l} is an edge of S and cik + c jl − ci j − ckl is minimized. N (s) is called the neighborhood of s. • If both S and S contain at least two cities. neighboring solutions can be obtained by deleting k edges from the current tour and reconnected the resulting paths using k new edges. The merging procedure have three possible alternatives (ﬁgure 1): • If both S and S consist of a single node then these two single node tours are merged in a tour. j} and replacing them with {i. k ). are selected in order to form the Restricted Candidate List. it is used a modiﬁed version of Kruskal’s algorithm. . the initial solution of the algorithm is exposed for improvement. the nearest merger algorithm [22]. Using k-opt moves. k} and { j. a list of all the edges of a given graph G = (V.EXPANDING NEIGHBORHOOD GRASP 237 In stage one of the construction phase it is created a list called Restricted Candidate List (RCL) that contains the best candidate edges for inclusion in the tour.

where S is the initial solution.238 MARINAKIS. Initially the edges in the current tour are ordered from largest to smallest costs. k) are the candidate for inclusion edges (ﬁgure 2). The heap is then updated with the deletion of the edges (i. The idea of using a larger neighborhood to escape from a local minimum in a smaller one had been proposed initially by Garﬁnkel and Nemhauser [9] and recently by Hansen and Mladenovic [13]. the edge with the largest cost is selected. the neighborhood function is deﬁned as exchanging two edges of the current solution with two other edges. but in a more sophisticated way than the one used in the classical 2-opt. k). Note that there is only one way to reconnect the paths. The cost of the candidate tour is. j) and (l. (i. then. j) \ (l. The constructed neighborhood is then S = S \ (i. compared with the cost of the current tour and the ﬁrst best tour found is used in order to replace the incumbent tour. Let i and j be the end-nodes of the edge and let l and k be the end-nodes of another candidate edge for deletion of the tour. l) ∪ ( j. k) ∪ (i. The neighborhood to be examined is constructed from the end-nodes of this edge. First. Example of modiﬁed Kruskall. MIGDALAS AND PARDALOS Figure 1. l) and ( j. l) and ( j. using a heap data structure. This algorithm does not use the classical approach of 2-opt but a restricted one. and from this order. k) and the addition of the edges (i. k) are the candidate for deletion edges and (i. The process is . This procedure is known as 2-opt procedure and was introduced by Lin (1965) for the TSP [23]. k). j) and (l.

n be the end-nodes of two other candidate edges for deletion of the tour. The neighborhood to be examined is constructed from the end-nodes of this edge. k) and (m. it introduces more ﬂexibility in modifying the current tour. and from this order. but in a more sophisticated way than the one used in the classical 3-opt. The constructed neighborhood is then S = S\(i. l). k) and . k). The overall best solution is kept. This procedure does not exclude a node of the current candidate for deletion edge to be the same as in a previous iteration. Checking whether an improving 2–opt move exists takes O(n 2 ) time. the algorithm has now the possibility to escape from it and ﬁnd a better solution. using a heap data structure.and 3-opt. No polynomial worst case bound on the number of iterations to reach a local optimum can be given. l) ∪ ( j. then. the algorithm tries to improve the solution by expanding the neighborhood using a restricted 3-opt procedure. because it uses a larger neighborhood. k)\(m. Initially the edges in the current tour are ordered from largest to smallest costs. The heap is then updated with the deletion of the edges (i. (l. repeated with the new tour using the edge with the largest cost from the updated heap. Examples of 2. k) are the candidate for inclusion edges (ﬁgure 2). Subsequently. The process terminates when a number of candidate for deletion edges is examined. (l. The cost of the candidate tour is. compared with the cost of the current tour and the ﬁrst best tour found is used in order to replace the incumbent tour. ( j. the edge with the largest cost is selected. (i. Let i and j be the end-nodes of the edge and let l. There are eight ways to connect the resulting three paths in order to form a tour. k and m. m) and (n. There are ( n ) ways to remove three edges from a tour. j). The 3-opt heuristic (ﬁgure 2) is quite similar the 2-opt. it can only be guaranteed that an improving move decreases the tour length by at least one unit. The tour breaks into three parts instead of only two. In the worst case. if 3 2-opt has been trapped in a local optimum. Thus. n) ∪ (i. However. j)\(l.EXPANDING NEIGHBORHOOD GRASP 239 Figure 2. where S is the initial solution. j). n) are the candidate for deletion edges and (i. This number was selected to be equal to the number of nodes. m) ∪ (n.

32 0.68 0.90 0.80 0.88 1.33 0.16 1.34 0.17 0.37 1.03 0 0 0 0 0 0 0.85 1.60 0.09 0.11 .18 1.15 0 0.28 0.05 0.04 0.07 0. MIGDALAS AND PARDALOS Comparison between the solution of the modiﬁed GRASP and the best known solution.71 1.07 2. MARINAKIS.29 0 0.38 0.05 0.03 1.26 1. Instance Eil51 Berlin52 Eil76 Pr76 Rat99 KroA100 KroB100 KroC100 KroD100 KroE100 Rd100 Eil101 Lin105 Pr107 Pr124 Bier127 Ch130 Pr136 Pr144 Ch150 KroA150 Pr152 Rat195 D198 KroA200 KroB200 Ts225 Pr226 Gil262 Pr264 A280 Pr299 Rd400 Fl417 Pr439 Pcb442 D493 Rat575 P654 D657 Rat783 Pr1002 Pcb1173 D1291 Rl1304 Rl1323 Fl1400 Fl1577 Rl1889 D2103 Pr2392 Nodes 51 52 76 76 99 100 100 100 100 100 100 101 105 107 124 127 130 136 144 150 150 152 195 198 200 200 225 226 262 264 280 299 400 417 439 442 493 575 654 657 783 1002 1173 1291 1304 1323 1400 1577 1889 2103 2392 Best solution with GRASP 426 7542 538 108159 1211 21282 22141 20749 21294 22068 7910 629 14379 44303 59030 118326 6110 96772 58537 6528 26524 73682 2331 15788 29380 29482 126643 80414 2385 49135 2589 48235 15385 11895 107401 50946 35253 6863 34707 49531 8897 262060 57676 51616 255185 273115 20310 22427 319250 81312 386017 Best known solution 426 7542 538 108159 1211 21282 22141 20749 21294 22068 7910 629 14379 44303 59030 118282 6110 96772 58537 6528 26524 73682 2323 15780 29368 29437 126643 80369 2378 49135 2579 48191 15281 11861 107217 50778 35002 6773 34643 48912 8806 259045 56892 50801 252948 270199 20127 22249 316536 80450 378032 Quality (%) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.240 Table 1.

65 20.65 19.25 8.65 Average improvement 2-opt vs 3-opt (%) 2.03 14.25 14.78 10.38 23.29 20.39 16.73 7.20 12.42 9.67 11.08 9.32 7.33 11.70 30.93 19.81 28.40 9.81 38.54 21.50 24.41 21.76 11.86 23.76 15.67 19.29 7.80 21.50 7. Average improvement Kruskall vs 2-opt (%) 6.09 25.13 11.27 9.43 16.39 15.38 10.06 18.62 10.77 8.17 26.23 10.60 22.83 19. Improvement of the solution from the ﬁrst phase by the 2-opt and 3-opt algorithms in the second phase.82 5.49 7.55 19.15 7.68 10.37 14.04 8.03 24.30 22.67 26.34 20.12 4.4 22.26 15.47 8.00 15.08 7.08 19.70 8.97 Instance Eil51 Berlin52 Eil76 Pr76 Rat99 KroA100 KroB100 KroC100 KroD100 KroE100 Rd100 Eil101 Lin105 Pr107 Pr124 Bier127 Ch130 Pr136 Pr144 Ch150 KroA150 Pr152 Rat195 D198 KroA200 KroB200 Ts225 Pr226 Gil262 Pr264 A280 Pr299 Rd400 Fl417 Pr439 Pcb442 D493 Rat575 P654 D657 Rat783 Pr1002 Pcb1173 D1291 Rl1304 Rl1323 Fl1400 Fl1577 Rl1889 D2103 Pr2392 Nodes 51 52 76 76 99 100 100 100 100 100 100 101 105 107 124 127 130 136 144 150 150 152 195 198 200 200 225 226 262 264 280 299 400 417 439 442 493 575 654 657 783 1002 1173 1291 1304 1323 1400 1577 1889 2103 2392 Kruskall 469 11007 770 151873 1728 29764 30287 30770 29764 30369 10912 884 21112 56587 84078 154734 8545 143527 87281 9320 37786 97398 3498 19700 40165 41667 202356 112729 3308 71794 3811 68071 21378 15598 153667 74886 46089 9587 45809 67962 12623 357617 85155 78895 388155 418267 26468 33000 478565 142727 552671 2-opt 438 8545 582 119622 1262 23235 24516 23094 22965 24337 8508 671 15764 49836 65369 132037 6852 102003 61162 7045 29397 84183 2546 16383 32315 32622 140775 94360 2616 55517 2807 54251 16618 13172 123147 56921 38196 7500 38908 55180 9890 287881 64569 58253 305154 319161 22320 26508 378945 87563 438540 3-opt 426 7542 538 108159 1211 21282 22141 20749 21294 22068 7910 629 14379 44303 59030 118326 6110 96772 58537 6528 26524 73682 2331 15788 29380 29482 126643 80414 2385 49135 2589 48235 15385 11895 107401 50941 35253 6863 34707 49531 8897 262060 57676 51616 255185 273115 20310 22427 319250 81312 386017 .6 20.36 24.92 24.95 23.96 10.93 22.93 29.75 7.91 22.96 21.49 9.16 21.05 24.41 9.20 16.EXPANDING NEIGHBORHOOD GRASP 241 Table 2.69 12.10 9.56 9.78 11.44 3.04 8.98 17.86 22.02 6.12 21.83 11.39 10.73 11.58 4.13 27.69 10.65 19.33 9.77 12.69 15.23 26.63 9.

The algorithm was tested on a set of 51 Euclidean sample problems with sizes ranging from 51 to 2392 nodes. again. ( j.242 Table 3. The length of RCL varies from 30 to 150. the process terminates when a number of candidate for deletion edges is examined and this number was selected. Each GRASP iteration concludes in a tour. As before. k). The number of iterations is between 10 and 100 depending on the number of nodes of each instance. running Suse Linux 6. MIGDALAS AND PARDALOS Analytical presentation of solutions for four instances. e. l). 3.de/ groups/compt/software/TSPLIB95). In all tables.g. For instances with a number of nodes over 1173 the algorithm was tested only for . resulting from the 3-opt procedure. The test instances were taken from the TSPLIB (http://www. Each instance is described by its TSPLIB name and size. Phase 30 50 80 100 150 Optimum 21294 Kruskall 2-opt 3-opt Lin105 Kruskall 2-opt 3-opt Pr124 Kruskall 2-opt 3-opt Fl417 Kruskall 2-opt 3-opt 15408 13049 11902 15366 13206 11918 15598 13129 11895 15466 12833 11908 15611 12762 11907 84895 64804 59385 83279 64064 59076 84942 64061 59076 85851 65020 59030 90729 65407 59076 11861 21341 15361 14379 21112 15764 14379 21841 15561 14379 20950 15706 14379 22285 15949 14379 59030 29764 22965 21294 30330 23187 21484 29364 22937 21294 31355 24341 21362 30311 22834 21309 14379 Instance KroD100 (m. MARINAKIS. the number 4 instance in Table 1 is named Pr76 with size equal to 76 nodes. to be equal with the number of nodes.3. The process is repeated with the new tour using the edge with the largest cost from the updated heap.uni-heidelberg. The ﬁnal result of the algorithm is simply the best solution over all iterations. This procedure does not exclude a node of the current candidate for deletion edge to be the same as in a previous iteration. the ﬁrst column shows the name of the instance while the second one shows the number of nodes. m) and (n.iwr. Computational results The modiﬁed GRASP was implemented in Fortran 90 and was compiled using the Linux VAST/ f90 compiler (f902f77 translator) on a Pentium III at 667 MHz. n) and the addition of the edges (i.

Improvement of the solutions from the phases of the algorithm for different sizes of the RCL 58000 56000 54000 52000 50000 48000 "First iteration of GRASP" "Second iteration of GRASP" "Third iteration of GRASP" "Fourth iteration of GRASP" "Fifth iteration of GRASP" "Sixth iteration of GRASP" "Seventh iteration of GRASP" 23000 22000 21000 Cost of the Solution 20000 19000 18000 17000 16000 "First iteration of GRASP" "Second iteration of GRASP" "Third iteration of GRASP" "Fourth iteration of GRASP" "Fifth iteration of GRASP" "Sixth iteration of GRASP" "Seventh iteration of GRASP" Cost of the Solution 46000 44000 15000 0 20 40 60 80 100 120 140 160 180 200 14000 0 20 40 60 80 100 120 140 160 180 200 Inner iterations of GRASP Inner iterations of GRASP Figure 4.EXPANDING NEIGHBORHOOD GRASP 243 32000 30000 Cost of the Solution 28000 Cost of the Solution "30" "50" "80" "100" "150" "Optimum" 23000 22000 21000 20000 19000 18000 17000 16000 "30" "50" "80" "100" "150" "Optimum" 26000 24000 22000 15000 20000 1 2 Phases of the Algorithm 3 14000 1 2 Phases of the Algorithm 3 95000 90000 85000 80000 75000 70000 65000 60000 55000 "30" "50" "80" "100" "150" "Optimum" 16000 15500 15000 Cost of the Solution 14500 14000 13500 13000 12500 12000 1 2 Phases of the Algorithm 3 11500 "30" "50" "80" "100" "150" "Optimum" Cost of the Solution 1 2 Phases of the Algorithm 3 Figure 3. Improvement of the solutions inside the phases of the algorithm .

Comparison of results for different sizes of the RCL. Best known solution (TSPLIB) 426 7542 538 108159 1211 21282 22141 20749 21294 22068 7910 629 14379 44303 59030 118282 6110 96772 58537 6528 26524 73682 2323 15780 29368 29437 126643 80369 2378 49135 2579 48191 15281 11861 107217 50778 35002 Instance Eil51 Berlin52 Eil76 Pr76 Rat99 KroA100 KroB100 KroC100 KroD100 KroE100 Rd100 Eil101 Lin105 Pr107 Pr124 Bier127 Ch130 Pr136 Pr144 Ch150 KroA150 Pr152 Rat195 D198 KroA200 KroB200 Ts225 Pr226 Gil262 Pr264 A280 Pr299 Rd400 Fl417 Pr439 Pcb442 D493 Nodes 51 52 76 76 99 100 100 100 100 100 100 101 105 107 124 127 130 136 144 150 150 152 195 198 200 200 225 226 262 264 280 299 400 417 439 442 493 30 426 7542 543 108159 1219 21282 22141 20749 21294 22068 7910 634 14379 44303 59385 120459 6172 98004 58537 6528 26704 73818 2359 15868 29798 29482 126643 81062 2424 49811 2649 49073 15613 11902 111606 51490 36214 50 426 7542 543 108159 1227 21282 22141 20852 21484 22068 7911 632 14379 44347 59076 120162 6174 96772 58537 6555 26693 73662 2357 15883 29708 29654 126962 81000 2440 50124 2640 49016 15579 11918 111398 51833 36166 80 426 7542 540 108159 1211 21282 22141 20749 21294 22068 7911 637 14379 44303 59076 118326 6170 97089 58537 6565 26618 73818 2363 15866 29637 29732 126643 80414 2385 49135 2632 48235 15385 11895 107401 51815 36137 100 428 7542 542 108159 1218 21282 22141 20749 21362 22068 7910 629 14379 44303 59076 120338 6110 97537 58537 6570 26524 73818 2331 15788 29855 29878 126643 81005 2448 49923 2589 48911 15701 11908 109800 50946 36481 150 426 7542 538 108159 1211 21282 22141 20769 21309 22068 7911 631 14379 44303 59030 119821 6131 97914 58537 6565 26585 73818 2351 15880 29380 29789 126643 81289 2429 49948 2645 48954 15559 11907 111300 51497 35253 .244 MARINAKIS. MIGDALAS AND PARDALOS Table 4.

the improvement varies from 6. More precisely. the sixth column indicates the average improvement obtained by the use of the 2-opt.e.3%. the ﬁfth the solution by the 3-opt. The quality of this local optimum is 0.8%. The quality of these local optima varies from 0. namely kroD100. 37. For the instances where the improvement is about 20%. for nineteen of them a solution was found with quality between 0. This difference is due to the quality of the initial solution. except for the instance pr2392 where the quality is 2. a better solution was found when the size of RCL was 80. and the ﬁfth column shows the quality of the solution produced by GRASP. For the 51 instances for which the algorithm was tested. ﬂ417.e. while the last column shows the average improvement obtained by the use of the 3-opt. The quality of the initial solution is 100(cKruskall −copt ) computed as p = . the quality of the initial solution varies between 30 and 40%. i. except for the instances d1291 and ﬂ1577 where the algorithm runs for 20 iterations. These results are from the GRASP iteration that gives the best solution for each instance. the third column shows the solution produced by Kruskall’s algorithm. the best solution was found for twenty three of them.09 and 1. the fourth column shows the best solution taken from the TSPLIB. it is observed that. the algorithm didn’t ﬁnd the optimum and produced different local optima for each size of the RCL. the improvement is based on the quality of the solution 100(c2opt −copt ) obtained by the 2-opt algorithm which is computed as p = .90%. and copt copt is the cost of the best known solution. For all other RCL sizes the results are quite similar to each other and . In the ﬁrst. The improvement of the solution obtained by using 3-opt is about 10%. for all instances. has reached the best known solution. pr124. Table 4 presents the best results for different sizes of the RCL. From this table. the fourth the solution by the 2-opt. This improvement varies from 2.EXPANDING NEIGHBORHOOD GRASP 245 10 iterations. i.e.28%. the same local optimum was found for all the RCLs. where cKruskall denotes the cost of the solution found by copt the ﬁrst phase of the algorithm. It can be seen from Table 1 that the algorithm.09%.4%. which for the Eil51 is 10. where cgrasp denotes the cost of the best solution found by GRASP. namely 30 and 80.07%. In Table 1. The sequence of copt improvements can be seen from ﬁgure 4. For the instances with the number of nodes between 280 and 2392 the quality of the solution is between 0. The quality of the produced GRASP solutions is given in terms of the relative deviation from the best known solution.37% in Rl1304. that is 100(cgrasp −copt ) p= . and only one of them the quality of the produced solution is larger than 2. the cost of the optimum solution is 21294 and the algorithm were able to ﬁnd it for two different RCL sizes.03 and 1. 15.00%. while for the D2103 is 77. For the fourth instance.47% for a list with size equal to 50. In Table 2. to 0. The instances for which these results are given are presented in Table 3. except for the RCL with size equal to 30. In the third instance. in most instances with the number of nodes up to 264. i.73% in Eil51 to 16. in most cases.11%.65% in D2103.60%. The four chosen instances have different features with respect to their solutions. in 45% of all cases. For the second instance. From Table 2.60% in Eil51 to 38. Again. it is noted that in almost all instances the improvement obtained by applying 2-opt to the solution produced by Kruskall is about 20%. the algorithm were able to ﬁnd the optimum for all ﬁve RCLs. the third column shows the best solution found by the modiﬁed GRASP. for a list with size equal to 80.60%. for eight of them a solution was found with quality between 1.03 and 0. lin105.

1 0.4 1.4 93.6 95.1 3.6 2.6 96.2 3.9 4.5 1.2 98.16 0.1 95.17 0.8 94 93.6 2.8 93.3 96.9 4.2 0.4 1.1 0.12 0.13 0.4 1.2 1.4 5.3 0.64 99.6 0.4 98.8 4.8 1.5 94.3 0.4 2.1 93 95.23 0.2 1 0.12 0.1 1.3 96.8 2.4 95. MARINAKIS.246 Table 5.3 95.2 1.1 96.4 4.19 0.14 0.3 1.4 0.7 1.9 99.2 93.4 3.3 0.2 0.1 1 1.63 99.8 .5 92.6 0.6 3.4 1.5 96.8 94.8 94.8 1 0.2 0.3 96.6 4.5 3.2 2.3 3.23 0.63 99.8 4.2 0.4 1.2 1.9 0.4 97.6 0.1 98.9 2.6 1.8 4.3 95 95.13 0.10 2-opt (%) 5 5 1.5 1.4 0. MIGDALAS AND PARDALOS Relative running times of the phases of the GRASP.64 99.2 3. Instance Eil51 Berlin52 Eil76 Pr76 Rat99 KroA100 KroB100 KroC100 KroD100 KroE100 Rd100 Eil101 Lin105 Pr107 Pr124 Bier127 Ch130 Pr136 Pr144 Ch150 KroA150 Pr152 Rat195 D198 KroA200 KroB200 Ts225 Pr226 Gil262 Pr264 A280 Pr299 Rd400 Fl417 Pr439 Pcb442 D493 Rat575 P654 D657 Rat783 Pr1002 Pcb1173 D1291 Rl1304 Rl1323 Fl1400 Fl1577 Rl1889 D2103 Pr2392 Nodes 51 52 76 76 99 100 100 100 100 100 100 101 105 107 124 127 130 136 144 150 150 152 195 198 200 200 225 226 262 264 280 299 400 417 439 442 493 575 654 657 783 1002 1173 1291 1304 1323 1400 1577 1889 2103 2392 Kruskall (%) 5 5 4.3 0.7 4.7 93.6 99 99.9 0.3 2.7 0.4 1.3 1.5 99.6 2.7 2.7 95.2 3.5 1.2 1.9 97.2 93.5 3.5 0.4 1.72 99.3 2.3 98.10 3-opt (%) 90 90 93.1 4 4.4 0.2 95.3 3.2 97.1 1.3 99 98.4 0.6 1.74 99.65 99.8 0.6 1.18 0.4 99.15 0.8 2.1 1.

01 6.51 0.61 3.90 0.37 7.39 10.36 1.74 0.54 0.65 1.07 Junger 3-opt 0. In Table 6.03 .47 1.41 2. The results obtained by GRASP are also compared with two different set of results.62 3. while 101 . In ﬁgure 4 the improvement of the solution in each of the seven ﬁrst iteration of the GRASP for two instances.74 9.39 0.99 correspond to the iterations of the 2-opt algorithm.46 5.63 0.55 2.00 0.46 3.01 1.00 1. In Table 5. and those presented in the DIMACS Implementation Challenge which occurred in 2000/2001.14 3.42 3.94 Junger LK2 0. those presented in [20].29 0. it can be observed that when the numbers of nodes is less than 127 the relative time for Kruskall is around 4. 0 corresponds to the ﬁrst phase (Kruskall’s algorithm) of GRASP.68 2.17 10.01 6.00 0.73 1.42 3. Table 6 compares the proposed GRASP to the results of this ﬁrst set.33 0.58 10.95 1.34 2.46 0.15 0.37 3. Lin105 and Pr107.05 1. Junger 2-opt 8.52 4.77 1.15 6.23 2.14 0. Comparison between GRASP and the approaches by Junger et al.71 3.01 3.52 8.81 0.8% when the number of nodes is 2392.39 1.2% and for 3-opt is around 93.34 0.56 0.85 13. For both instances the optimum solution was found after the fortieth iteration of the 3-opt algorithm. for 2-opt is around 2. reaching 99.72 3. The x-axis shows the number of inner iterations that is. As the number of nodes increased the relative time for 3-opt is increasing.00 0.44 0.38 0.49 0.200 correspond to the iterations of the 3-opt algorithm.00 0.00 0.68 1.19 1.EXPANDING NEIGHBORHOOD GRASP 247 quite close to the best solution.17 0.03 0.05 0.53 2.00 0.00 0.55 1.55 0.12 1.71 1.57 9.3%.06 0.55 0.00 0.32 0.62 4.75 0.38 0.55 0.26 1.77 Junger ILK 0.22 Junger LK1 0.93 14.00 0.09 0.47 0.18 1. fourth and ﬁfth columns show the relative times of the three phases with respect to the total time.07 6.89 7. the third.04 4.00 0.48 4.05 0.16 0.93 6. is presented.84 1. 1 .03 0.85 3.49 2.91 Instance Lin105 Pr107 Pr124 Pr136 Pr144 Pr152 Rat195 D198 Pr226 Gil262 Pr264 Pr299 Rd400 Pr439 Pcb442 D493 Rat575 P654 D657 Rat783 Nodes 105 107 124 136 144 152 195 198 226 262 264 299 400 439 442 493 575 654 657 783 GRASP 0. The ﬁrst one considers only instances with a number of nodes less than 1000. This is due to the fact that the full 3-opt version is used.79 2.00 0.10 1.72 1.00 0.32 4. From this table.72 0.12 1.00 0.00 2.41 2.94 1.74 2.68 0.18 0.00 0.5%.

) .248 Table 7. Method Optimal GRASP MARINAKIS. Pr1002 259045 262060 Pcb1173 56892 57676 d1291 50801 51616 rl1304 252948 255185 rl1323 270199 273115 ﬂ1400 20127 20310 ﬂ1577 22249 22427 rl1889 316536 319250 d2103 80450 81312 pr2392 378032 386017 Tour Construction Algorithms Strip BW-Strip Spaceﬁl FRP 394289 394289 364559 425046 73779 73779 80189 91670 553358 395731 413582 480826 546842 397495 438287 496323 643133 518485 517608 589213 556394 556394 545731 608752 Classic Tour Construction Algorithms B-Greedy C-Greedy Boruvka Q-Boruvka NN CHCI RI FI CW CCA HKChrist 311124 309389 302666 311188 312691 295940 293925 286683 288356 284609 276255 67220 68135 68350 65321 72969 67059 66310 65501 63404 63668 59901 53780 61979 61356 61517 57697 63753 301783 306481 294245 300826 325331 302853 302295 310246 280769 273796 271186 309811 313306 315845 327940 345973 323994 325854 326644 297899 294950 287734 22223 23607 24847 24175 24617 26527 26771 26327 25522 26235 27598 27445 371663 378846 360741 383198 385967 375611 374435 371709 352748 354690 339415 83307 91290 88915 91238 88187 92399 454371 454439 449044 456121 469774 443643 440108 427610 423945 421011 401404 Simple Local Search Algorithms (2-Opt. MIGDALAS AND PARDALOS Solution from all the algorithms used from the comparisons in the ten chosen instances. etc.5opt-B 3opt-J 3opt-B H2 H3 H4 GENI GENIUS 273632 274280 299708 273782 267444 269682 273280 269039 267594 284517 271919 60438 60422 66338 59492 58650 59176 59451 58824 59011 61673 58407 54310 53611 53328 52957 57114 55604 266203 270501 292767 267903 261382 262126 269231 265604 266921 277605 273138 280892 285347 311702 286058 275426 276243 281864 279034 276877 286638 282055 21925 20830 20555 24260 24055 23983 20795 23507 23326 25216 20904 24631 333680 343755 359136 336762 329249 330682 341443 334823 330116 333759 328470 82601 81755 82088 82738 88150 83563 403579 407396 444207 405106 390673 390709 400727 398906 394614 411325 394431 Lin-Kernighan Implementations and Variants CLK ACRLK LKHKCh LK-J 266313 265552 262323 263615 58110 59781 57657 57758 53418 52787 51708 52051 263065 264124 256305 256116 284992 279139 272701 273245 20974 20755 20503 20362 24254 24899 22400 22457 322868 329179 319873 320364 81314 81871 80835 81819 390510 389600 382869 384124 (Continue on next page.) 2opt-J 2opt-B 2opt-C 2. 3-Opt.

the GRASP results are better in all cases except for ﬁve instances where the differences in the quality of the solutions are between 0. The Iterated Lin Kernighan is considered as the most cost effective adaptation for the solution of the Traveling Salesman Problem according to Johnson in [17]. . Method LK-N LK-NYYY SCE ALK H-LK (Continued). The ﬁfth and sixth column of the table present the results obtained by two versions of the Lin Kernighan heuristic as they are described in [20]. respectively. As for the ﬁfth method (ILK).05 to 0. The ﬁnal column shows the quality of the results obtained by the proposed GRASP algorithm. For all instances the results from the proposed GRASP are better than those from the ﬁrst four methods (including the two versions of the Lin Kernighan). Pr1002 262468 263396 262288 259045 259441 Pcb1173 58012 57504 57700 56893 56995 d1291 52175 53092 51604 50861 51267 rl1304 256613 255987 258038 256202 254415 rl1323 273768 272753 274580 271346 270513 ﬂ1400 20367 20307 20382 20262 20347 23487 ﬂ1577 22879 22972 22746 rl1889 320359 321149 324059 317578 318556 80620 d2103 82215 81245 81885 249 pr2392 385073 382005 383622 380051 378923 Repeated Local Search Algorithms CCLK ACRCLK ILK-J I3opt ILK-N ILK-NYYY M-LK K-MLK VNS-2H VNS-3H 259952 259657 259045 261298 261014 259045 259734 259045 261758 261913 57198 56968 56897 57032 57637 56892 57126 56992 57961 57535 51828 51118 50868 50912 51009 50801 51295 50886 51429 51812 255270 253361 252948 252948 254557 252948 255390 252948 254159 256010 271074 270763 270226 270484 270774 270226 271468 270199 272383 271736 20260 20129 20127 20169 20279 20127 20376 20164 20404 20175 22873 22854 22254 22274 22266 22249 22327 22263 23197 22494 318094 318296 317134 318719 317712 316562 320948 316540 327304 323158 80610 80665 80450 80944 81347 80495 81253 80487 81642 81419 379917 380878 378226 382058 380956 378051 379376 378032 392948 386662 Algorithms that Use Chained LK as a Subroutine BSDP TourMerge 260293 259045 57054 56892 51072 50801 255370 252948 271636 270226 20260 20127 23817 22249 318032 316562 81737 80466 379987 378032 Metaheuristics TS22 TS2DB TSLKLK TSLKDB TSSCSC TSSCDB 269765 268743 261323 261469 260683 261277 59942 59187 57290 57627 57546 57527 53196 52782 51764 51149 50956 50983 271259 264218 254542 255890 254698 256020 281905 278973 272387 273018 272195 270877 21600 20379 21416 20296 20720 20185 23096 22848 24205 23226 24864 23285 332538 326583 318265 317666 319959 318805 81657 81360 81007 81034 80849 80665 143688 391195 380486 381715 380066 380309 the third and fourth columns of this table the quality of the results of the 2-opt and the 3-opt algorithm are presented.EXPANDING NEIGHBORHOOD GRASP Table 7.62%. while in the seventh column the results obtained by the Iterated Lin Kernighan are presented [20].

18 18.00 5.88 1.47 19.97 5.50 3.804 (Continue on next page. Quality of each of the solutions and average quality from all the algorithms used from the comparisons in the ten chosen instances.80 0.17 1.52 9.01 19.05 1.41 21.65 11.33 1.08 29.49 2.18 63.44 12.47 7.559 4.09 47.52 13.24 4.41 1.74 5.96 5.) 2opt-J 2opt-B 2opt-C 2.85 20.70 2.83 4.39 8.14 5.97 6.34 3.02 4.67 1.97 13.93 1.49 10.56 5.565 49. Method GRASP Pr1002 Pcb1173 d1291 rl1304 rl1323 ﬂ1400 ﬂ1577 rl1889 d2103 pr2392 Average 1.23 6.15 3.73 19.68 40.113 6.88 15.14 14.15 11.90 0.73 64.52 4.45 15.12 1.38 56.46 2.37 31.08 1.218 4.89 10.99 1.61 1.16 6.91 5.21 18.89 5.32 5.13 11.24 7.68 10.10 19.68 0.78 20.93 3.95 20.685 Lin-Kernighan Implementations and Variants CLK ACRLK LKHKCh LK-J LK-N 2.27 17.62 2.00 4.635 6.21 83.00 5.70 5.85 17.71 5.96 17.77 0.97 4.31 0.92 24.21 40.31 21.34 3.91 20.52 1.30 9.87 6.60 13.60 20.32 9.89 22.25 1.24 6.62 21.21 52.52 86.04 2.66 24.19 9.71 14.36 Classic Tour Construction Algorithms B-Greedy C-Greedy Boruvka Q-Boruvka NN CHCI RI FI CW CCA HKChrist 20.34 1.202 1.45 63.12 7.87 6.51 7.26 10.87 16. MIGDALAS AND PARDALOS Table 8.14 47.91 5.66 6.78 21.44 3.60 4.63 6.57 25.84 9.42 13.36 15.76 20.55 9.05 4.39 4.64 18.42 13.33 3.07 2.48 1.16 3.11 12.36 61.86 3.405 47.27 2.72 8.47 6.250 MARINAKIS. 3-Opt.25 9.93 28.64 11.37 1.545 74.76 1.77 17.94 15.62 19.50 90.43 9.24 4.84 20.18 44.24 13.36 16.5 3.27 1.91 1.5opt-B 3opt-J 3opt-B H2 H3 H4 GENI GENIUS 5.00 3.67 11.6 24.81 4.93 2.86 3.07 1.06 18.43 16.28 1.83 2.70 4.32 3.06 1.76 19.86 10.03 75.26 17.308 1.66 23.60 0.23 17.63 5.83 14.91 0.77 2.93 2.21 1.34 6.13 118.13 20.40 2.19 20.05 7.13 1.496 Simple Local Search Algorithms (2-Opt.01 11.78 4.35 17.24 12.33 18.31 21.21 14.43 6.95 61.04 33.46 6.15 19.35 6.87 1.29 17.45 11.16 16.76 102.181 Tour Construction Algorithms Strip BW-Strip Spaceﬁl FRP 52.23 3.61 15.98 3.87 5.32 2.69 3.50 3.11 1.76 7.00 20.68 29.53 4.11 16.40 3.30 3.13 9.21 3.71 17.39 8.11 62.10 18.19 3.08 4.91 3.01 4.57 3.) .16 1.32 4.87 1.09 13.29 5.81 2.04 8. etc.33 14.51 22.09 4.85 1.50 19.66 18.04 23.41 6.37 6.42 1.65 13.44 5.57 3.43 11.11 5.07 0.20 16.82 28.916 4.79 3.21 1.18 47.42 8.51 1.69 103.47 3.45 5.708 18.79 2.80 28.86 22.55 15.80 63.00 8.062 8.93 14.46 10.93 3.45 5.47 16.29 5.75 7.

11 2.29 0.31 0.21 0.53 0.12 0.00 1.292 Algorithms that Use Chained LK as a Subroutine BSDP TourMerge 0.94 0. In Tables 7 and 8.00 3.00 0.08 1.50 1.46 0.36 4.743 0.42 0.00 0.15 1.27 0.03 0.71 3.25 1.034 The proposed GRASP results are also compared with the results presented in the DIMACS Implementation Challenge (http://www.00 0.12 0.00 1.21 4.88 0.308 0.35 0.70 1.66 0.413 0.36 0. This challenge is probably the most extensive examination made to date of heuristic algorithms in the ﬁeld of TSP. the results for ten instances.52 0.13 2.18 4.041 0.05 1.64 0.41 0.88 0.88 1.72 0.)).00 0.01 0.05 1.00 0.11 0.39 11.86 5. algorithms that Use Chained LK as a Subroutine and Metaheuristics (Tabu Search.32 1.24 4.81 1.72 1. with number of nodes between 1002 and 2392.42 0.12 4.65 0.38 0. etc.74 0.24 1.00 0.97 0.23 1.05 1.17 0. 3-Opt.36 7.25 6.02 0.14 3.58 0.55 0. are presented.46 2.48 0. Method LK-NYYY SCE ALK H-LK (Continued).24 0.89 1.2 1.567 0.00 0.21 0.69 0.36 1.09 5.95 1.21 0.48 0.11 0.97 0.01 0.21 0.00 0.02 0.EXPANDING NEIGHBORHOOD GRASP Table 8.48 1.16 0.00 1.00 3.00 0.969 Repeated Local Search Algorithms CCLK ACRCLK ILK-J I3opt ILK-N ILK-NYYY M-LK K-MLK VNS-2H VNS-3H 0.84 2.00 7.00 0.00 0.00 0.17 1.13 0.92 1.642 0.40 0.27 0. .01 0.83 3.75 4.08 0.33 3.69 0.67 1.69 0.66 5.00 0.28 0.01 1.90 1.60 4.241 2.54 0. Repeated Local Search Algorithms.48 0.27 0.74 0.13 0. Computational results for approximately 40 tour construction heuristics (divided in three categories.95 0.00 0.92 0.16 0.25 2.57 0.47 0.95 2.76 0.32 0.993 1.05 1.034 1.09 0.24 1.11 0.76 0.49 0.66 0.54 0.20 0.20 0.63 1.51 1. att.00 0.25 7.99 0.77 0. 251 Pr1002 Pcb1173 d1291 rl1304 rl1323 ﬂ1400 ﬂ1577 rl1889 d2103 pr2392 Average 1.04 0.24 0.01 1.35 3.08 0.research.29 1.19 0.50 0.68 1.00 0.50 0.33 0.58 0.47 0.80 2.11 0.).96 0.26 1.27 4.31 0.79 4.01 1.01 0.61 1.com/ dsj/chtsp/).60 0.75 0.25 0.28 0.00 0.10 0.22 0.009 0.53 0.73 0.01 0.41 0.05 0.56 0.00 0.56 3.706 1.06 1.18 1.25 0.00 0.37 0.97 0. Classic Tour Construction Algorithms with quality >15% above the Held and Karp Bound and Classic Tour Construction Algorithms with quality <15% of the Held and Karp Bound) and for approximately 80 tour construction heuristics divided into 5 categories (namely.13 0.53 0.38 0.63 0.24 2.64 0.702 0. Simple Local Search Algorithms (2-Opt.81 2.29 3.06 4.48 1.39 0.81 0. Etc.40 2.62 0.06 0.004 Metaheuristics TS22 TS2DB TSLKLK TSLKDB TSSCSC TSSCDB 4.00 0.21 0.35 0.42 0.02 0.69 8. Lin-Kernighan Implementations and Variants.62 0.01 0.20 2.15 1.69 1.06 3. namely.18 1.78 1. Tour Construction Algorithms that Emphasize Speed over Quality.99 1.064 1.90 0.87 0.

041 0.916 4.) .Kernighan by Johnson Keld Helsgaun’s Multi-Trial Variant on Lin-Kernighan Iterated-3-Opt by Johnson Augmented-LK Applegate-Cook-Rohe Chained Lin–Kernighan Iterated Lin–Kernighan by Neto Multi-Level Lin-Kernighan Implementations Concorde Chain Lin-Kernighan Helsgaun Lin-Kernighan Tabu-search-SC-DB Expanding Neighborhood Search GRASP Lin-Kernighan-with-HK-Christo-starts Tabu-search-LK-DB Variable-Neighborhood-Search-Using-3-Hyperopt Balas-Simonetti-Dynamic-Programming-Heuristic Johnson Lin–Kernighan Stem–Cycle Ejection Chain LK-NYYY Neto Lin–Kernighan Variable-Neighborhood-Search-Using-2-Hyperopt Tabu-search-Stem .062 4.241 1.804 1.413 0.969 1.181 1.567 0.11 3.034 2.034 1.2 3.113 6.5 3.5opt-B 2opt-J 2-Hyperopt Held Karp Christoﬁdes Method Average 0.993 2.41 1.42 0.702 1. MIGDALAS AND PARDALOS Ranking of the algorithms based on the average quality. Rank 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 Tour Merging ILK-NYYY Iterated Lin .559 4.202 1.642 0.218 6.004 0.635 3.496 (Continue on next page.Cycle -SC Tabu-search-LK-LK Tabu-search-2opt-Double Bridge 3opt-J 3opt-B Concorde-Lin-Kernighan 4-Hyperopt Applegate Lin Kernighan 3-Hyperopt GENIUS Tabu-search-2opt-2opt 2.292 1.308 1.685 4.064 0.93 6.706 1. MARINAKIS.009 0.743 0.88 0.308 4.83 5.252 Table 9.

83 8. the ﬁelds are left empty.) Rank 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 Method 2opt-B GENI CCA Clarke–Wright 2opt-C FI RI Boruvka CHCI C-Greedy B-Greedy Q-Boruvka NN Spaceﬁlling Best–Way Strip FRP Strip Average 6. the ﬁnal column in Table 8 shows the average quality (over different runs) for these algorithms.76 17.545 49.research. As expected. the proposed GRASP ranks better than all tour construction heuristics. For the cases where the results are not given in the web page.43 11.565 253 The results are taken from the web page of the DIMACS Implementation Challenge (http://www. where the proposed GRASP is ranked thirteen among 54 algorithms.att. it is possible to modify GRASP in order to take advantage of this fact. It is known [18] that all metaheuristic–codes for STSP are dominated by 3-opt. In both tables. It also ranks better than all simple local search algorithms with exception of two Lin–Kernighan Implementations.47 17.56 16.05 10. as we mention in Section 4.708 19.6 24. The proposed GRASP gives slightly inferior results when compared to algorithms based on Iterated or Chained Lin Kernighan. we have included results for tour construction algorithms although these approaches are not competitive with methods which use a single or a more complicated local search phase. (Continued.85 47.EXPANDING NEIGHBORHOOD GRASP Table 9.26 14. the ﬁrst column gives abbreviated names for the algorithm. However.64 17.96 18.405 74. It ranks better than all Variable Neighborhood Search implementations and it also ranks better than all metaheuristics but one Tabu implementation. . Lin-Kernighan and Iterated Lin–Kernighan. Since different runs of some algorithms may result in different solutions.23 18. In Table 10 those abbreviations are explained and references to the literature are given. the best results for each such approach are presented in Tables 7 and 8.36 75.com/ dsj/ chtsp/). This is used in order to rank the approaches in increasing order of average quality in Table 10. On the other hand. For completeness.

Cheapest Insertion.and y-direction strips. don’t-look bits [3] (Continue on next page. Abbrev Strip BW-Strip Spaceﬁl FRP B-Greedy C-Greedy Boruvka Q-Boruvka NN CHCI RI FI CW CCA HKChrist 2opt-J 2opt-B 2opt-C 2.5opt-B 3opt-J 3opt-B H2 H3 H4 GENI GENIUS CLK ACRLK Short Description of heuristic and Reference Paper Strip algorithm with initial sorting in x direction [18] Best-Way Strip algorithm. best of x.) .5-Opt:Bentley kd-tree based implementation [5] 3opt-JM-20-quadrant-neighbors[MIPS] with don’t-look bits by Johnson [17] 3-Opt:Bentley kd-tree based implementation [5] 2-Hyperopt [17] 3-Hyperopt [17] 4-Hyperopt [17] GENI(p=10) [10] GENIUS(p = 10) [10] Concorde-Lin-Kernighan [3] Applegate—Cook—Rohe Lin-Kernighan with 12 quadrant neighbors. radix sort [18] Spaceﬁlling-Curve algorithm of Bartholdi and Platzman [18] Fast-Recursive-Partitioning-Jon Bentley’s kd-tree based implementation [5] Benchmark Greedy [18] Greedy:Concorde implementation [3] Boruvka: Concorde’s implementation [3] Quick-Boruvka—Concorde’s implementation [3] Nearest-Neighbor—Concorde’s implementation [3] Convex-Hull-Cheapest-Insertion (kd-tree based implementation) [5] Random-Insertion—Jon Bentley’s kd-tree based implementation [5] Farthest-Insertion—Jon Bentley’s kd-tree based implementation [5] Clarke-Wright-Savings Algorithm [6] Golden-Stewart Convex Hull. MIGDALAS AND PARDALOS Table 10. Angle Selection Algorithm [18] HeldKarp-Based-Christoﬁdes with Greedy Shortcuts [18] 2opt-JM-40-quadrant-neighbors with don’t-look bits by Johnson [17] 2-Opt:Bentley kd-tree based implementation [5] Concorde-2opt [3] 2.254 MARINAKIS. Abbrevated names used in the tables and explanation of them.

Conclusions In this paper. 40 quadrant neighbors [27] Nguyen-Yoshihara-Yamamori-Yasunaga Lin-Kernighan [18] Chris Walshaw’s Multi-Level Lin-Kernighan Implementations [34] Keld Helsgaun’s Multi-Trial Variant on Lin-Kernighan [14] Variable-Neighborhood-Search-Using-2-Hyperopt [18] Variable-Neighborhood-Search-Using-3-Hyperopt [18] Balas-Simonetti-Dynamic-Programming-Heuristic [18] Tour-merging (using a branch decomposition) of 10 Chained Lin-Kernighan tours [3] Tabu-search-2opt-2opt [35] Tabu-search-2opt-Double Bridge [35] Tabu-search-LK-LK [35] Tabu-search-LK-DB [35] Tabu-search-Stem .EXPANDING NEIGHBORHOOD GRASP Table 10. The results obtained using this algorithm were very satisfactory and in many cases (45% of the tested instances) equal to the best known solutions. (Continued.Cycle -SC [35] Tabu-search-SC-DB [35] 255 4. a new algorithm for the solution of the traveling salesman problem was presented. would be possible to achieve by employing a different algorithm in the construction . it was very close to it.) Abbrev LKHKCh LK-J LK-N LK-NYYY SCE ALK H-LK CCLK ACRCLK ILK-J I3opt ILK-N ILK-NYYY M-LK K-MLK VNS-2H VNS-3H BSDP TourMerge TS22 TS2DB TSLKLK TSLKDB TSSCSC TSSCDB Short Description of heuristic and Reference Paper Lin-Kernighan-with-HK-Christoﬁdes-starts [18] Lin-Kernighan-JM-40-quadrant-neighbors with don’t-look bits by Johnson [17] Lin-Kernighan-Neto-20-quadrant-neighbors [27] Nguyen-Yoshihara-Yamamori-Yasunaga Lin-Kernighan Variant [18] Stem and Cycle Ejection Chain Algorithm [18] Augmented LK [18] Helsgaun’s Implementation of a powerful Lin-Kernighan Variant [14] Concorde-Chain Lin Kernighan [3] Applegate-Cook-Rohe Chained Lin-Kernighan [3] Iterated Lin–Kernighan by Johnson [17] Iterated-3-Opt by Johnson [17] ILK-Neto-(N/10)-iterations with cluster compensation. The algorithm is a modiﬁcation of the well known Greedy Randomized Adaptive Search Procedure. Further improvement of the GRASP approach both with respect to solution quality and running times. For instances where the solution obtained is not equal to the optimal one.

9. 126.K.M. E. “Scheduling of vehicles from a central depot to a number of delivery points”. However. A. MA.M. Feo and M. D. Wiley and Sons. Ribeiro and P. 1997.K. Bixby. P. Golden and W. 1985. J. and W. Applegate. G. C. 1964. 3.” in the Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. Lenstra. and S. D.” European Journal of Operational Research. “Experimental Analysis of the STSP.A. “The traveling salesman problem: A case study.C. P.S. Aarts and J. McGeoch. Johnson and L. Lawer. B. 2002.and 3. Work on these issues is under development. A. G. Computing.S. McGeoch. Int. 10. 6. 269–294. A. 1–41. R. pp. Hansen (Eds. Punnen (Eds. Gutin and A. 471–528.). Wiley and Sons. D. The investigation has also obviated the need for the development of satisfactory termination criteria for the GRASP approach. Helsgaun. “Variable neighborhood search: Principles and applications. Punnen.). R.G. Holmqvist. Chvatal. 1086–1094. Storøy (Eds. As it was shown in the comparisons of Table 9 the algorithm was ranked in the thirteen place between all the algorithms used in the DIMACS Implementation Challenge. Kluwer Academic Publishers. pp.C. Migdalas. J. Kluwer Academic Publishers. G. P. Holmqvist. pp. 5. 2002. A. 2001. Hertz. A. 387–411. 369– 444. Perego. Bentley. V. J. Rinnoy Kan and D. Gutin and A. R. 449–467. and W. Kluwer Academic Publishers Dordrecht.256 MARINAKIS. 1997. 645–656. “On the solution of traveling salesman problem.M.L. vol. 1992. 40. 13.” Operations Research. Kluwer Academic Publishers Dordrecht. Cook. Pardalos. 12. “Chained Lin-Kernighan for large traveling salesman problems. “Empirical analysis of heuristics. Garﬁnkel and G.). Stewart. 207–249. vol. (to appear). 1972. Operations Research. D. and R.M. vol. pp. 6. Nemhauser. E. “Greedy randomized adaptive search procedure. vol. “New insertion and postoptimization procedures for the traveling salesman problem. Wiley and Sons. vol.G. Clarke and J.A. Local Search in Combinatorial Optimization. Bixby. vol. The Traveling Salesman Problem and Its Variations.” in Local Search in Combinatorial Optimization.” European Journal of Operational Research. “Parallel continuous non-convex optimization. 109–133. R. GRASP is ﬂexible enough to either incorporate the Iterated Lin Kernighan in its second phase or replace the 2. .). Mladenovic. T. V. 1992.K. and P. 7.” in Parallel Computing in Optimization. pp.” Documenta Mathematica: Proc. Gendreau. 2001. Applegate.B. The only approaches that show better performance than the proposed GRASP are those based on Iterated or Chained Lin Kernighan.” in the Traveling Salesman Problem and Its Variations. pp. Laporte. Aarts and J. 1995. 12.” ORSA J. Migdalas. 6. Storøy (Eds. vol. Hansen and N.A.L.H.C. vol. Pardalos.” in Parallel Computing in Optimization. Lenstra. 4.). and P. and S. Shmoys (Eds. Baralia. 106–130. Wiley and Sons. 8. Lenstra (Eds. P. MIGDALAS AND PARDALOS phase.G.” IEEE Transactions on Evolutionary Computation. Pardalos. pp. Kluwer Academic Publishers: Norwell. K. “A hybrid heuristic for the traveling salesman problem. pp. 17. 1997.opt methods by such an approach. 2001. 15. pp. “An effective implementation of the lin-Kernighan traveling salesman heuristic. E. pp. 14. Hildago. M. 3. 2000. and G.I.L. 16. 215–310. Integer Programming. Cook. pp. Chvatal.R. “Fast algorithms for geometric traveling salesman problems.” Informs Journal on Computing. “Parallelized heuristics for combinatorial search. K. 1998. A.” Journal of Global Optimization. for instance one based on Prim’s algorithm. Johnson and L. Migdalas. Pardalos.” in Essays and Surveys on Metaheuristics. and by better utilizing the concept of expanding neighborhood local search in the second phase. 4. Wright. pp. 1997. References 1. 2. K. Festa and M. Migdalas. pp. Resende. Mathematica. 130. 568–581.). 18. “GRASP: An annotated bibliography. 5. Resende.W. no. 11. Cogr.

Reinhelt. Wiley and Sons. 1995. 231–247. M. pp. 31. 205–236.” International Transactions in Operational Research.C. “A parallel GRASP implementation for the quadratic assignment problem. vol. Dam. Papadimitriou. M. Osman and J. Pardalos. 25. J. 32. simulated annealing.M. Lawer. 1995. 591–606. Modares. 20. 1996. Somhom. Philadelphia. A. and M. Rinnoy Kan.M.S.C. Shmoys.” in Networks Models. Elsevier Science B.G.).). pp.L. Laporte. Resende. vol.L. Migdalas. Pitsoulis. Y. 6.organizing neural network approach for multiple traveling salesman and vehicle routing problems. A. Rolim (Eds.” in Handbooks of Metaheuristics. Pitsoulis. “Heuristic solutions of vehicle routing problems in supply chain management. Shmoys (Eds. Pardalos.V. Ribeiro.” in Meta-heuristics: Theory and Applications. pp. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization.C. Computational solutions for TSP Applications. 1995. C. 21.P. Zachariasen and M.” Society for Industrial and Applied Mathematics.H. Marinakis and A. 571–587. 33. Lawer. Neto.H. and M.B. Pardalos. D. T. Kluwer Academic Publishers Dordrecht. vol.” PhD Thesis. Reinhelt.H. “Data structures and network algorithms. pp. 2002. Mavridou. tabu search and GRASP. pp. 44. 23. Kluwer Academic Publishers Dordrecht. 1999. Rinnoy Kan and D. “Greedy randomized adaptive search procedures. A. “An effective heuristic algorithm for the traveling salesman problem.” in Combinatorial and Global Optimization. G. Lin.). Ferreira and J. “Computational complexity. (Eds. 26.W. E. 1992. 498–516. and D.” European Journal of Operational Research. Resende and C. vol. M. Pennsylvania.” in the Traveling Salesman Problem and Its Variations. J. 24. P. Punnen (Eds. 2002. Ferreira and J. Tarjan.” Operation Research. pp. 7. 28.M. 35. 22.State of the Art. 1999. vol. 30. Lenstra. Lin and B. A.EXPANDING NEIGHBORHOOD GRASP 257 19. “Computer solutions of the traveling salesman problem.G. 219– 249. Kluwer Academic Publishers Dordrecht. P. 225–330. Wiley and Sons. 1985. D. The Traveling Salesman Problem. .D. pp. G. Kelly (Eds. I.” Operations Research. Rolim (Eds. Handbooks in OR and MS. A. “Tabu search on the geometric traveling salesman problem. 21.” in the Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. (to appear). Lenstra. 317–331. M.). G. A. Migdalas. 27. “A self .A. Junger. Walshaw.” Bell System Technical Journal. pp. “Local search and metaheuristics. R. Rinaldi. C.K.).). Rego and F.G. S.” in Solving Irregular Problems in Parallel—State of the Art. Kernighan.H. 34. pp. SpringerVerlag. 29. pp. 37–85. and G. 2245–2269. Glover and G. Burkard (Eds. “A multilevel approach to the traveling salesman problem. 59. Glover. S. Kluwer Academic Publishers: Boston. L. Kluwer Academic Publishers Dordrecht. and R. “Efﬁcient cluster compensation for Lin–Kernighan heuristics. 1983. F. P. Computer Science University of Toronto. World Scientiﬁc Publishing Co. 1985. 309–367.). Johnson and C. “The traveling salesman problem. E.G. “The traveling salesman problem: An overview of exact and approximate algorithms. L. and T. 2003.” in Solving Irregular Problems in Parallel . Kochenberger (Eds. 1965.C. S. 1994. G. Resende. Gutin and A. “Parallel search for combinatorial optimization: Genetic algorithms. Ball et al.B. Enwaka. 1973. pp.K.).

Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

We've moved you to where you read on your other device.

Get the full title to continue

Get the full title to continue listening from where you left off, or restart the preview.

scribd