
International Journal of Modern Physics C, Vol. 9, No. 1 (1998) 133-146
© World Scientific Publishing Company

MICROCANONICAL OPTIMIZATION APPLIED TO THE TRAVELING SALESMAN PROBLEM

ALEXANDRE LINHARES
Computação Aplicada e Automação, UFF, 24210-240 Niterói, RJ, Brazil
E-mail: linhares@nucleo.inpe.br

JOSÉ R. A. TORREÃO
Computação Aplicada e Automação, UFF, 24210-240 Niterói, RJ, Brazil
E-mail: jrat@caa.uff.br

Received 24 October 1997
Revised 17 December 1997

Optimization strategies based on simulated annealing and its variants have been extensively applied to the traveling salesman problem (TSP). Recently, there has appeared a new physics-based metaheuristic, called the microcanonical optimization algorithm (µO), which does not resort to annealing, and which has proven a superior alternative to the annealing procedures in various applications. Here we present the first performance evaluation of µO as applied to the TSP. When compared to three annealing strategies (simulated annealing, microcanonical annealing and Tsallis annealing), and to a tabu search algorithm, the microcanonical optimization has yielded the best overall results for several instances of the Euclidean TSP. This confirms µO as a competitive approach for the solution of general combinatorial optimization problems.

Keywords: Combinatorial Optimization; Microcanonical Ensemble; Simulated Annealing; Traveling Salesman Problem.

1. Introduction

The traveling salesman problem (TSP) has been studied since the early days of scientific computation, and is now considered the benchmark in the field of combinatorial optimization. The problem can be easily stated: given a set of cities, the goal is to find a path of minimal cost, going through each city only once and returning to the starting point. In spite of its simple formulation, the TSP has been proven to be NP-hard, meaning that there probably does not exist an algorithm which can exactly solve a general instance of the problem in plausible processing time. The best that can be expected is thus to find approximate strategies of solution, called heuristics. If a heuristic is a general-purpose procedure which can be applied to a variety of problems, it is referred to as a metaheuristic.


Among the metaheuristics employed for the TSP, optimization algorithms derived from statistical physics have received a great deal of attention.1-3 Simulated annealing, as introduced by Kirkpatrick et al.,4 was the first such algorithm, and many variants of it have appeared, such as fast simulated annealing,5 microcanonical annealing,6 and Tsallis annealing.3 Recently, a new strategy has been proposed which is also based on principles of statistical physics, but which does not resort to annealing. It is called the microcanonical optimization algorithm (µO), and has so far been employed, with remarkable success, in the context of visual processing,7,8 and for task allocation in distributed systems.9 Here, we present an analysis of µO when applied to the TSP, comparing it to some annealing-based procedures (simulated annealing, microcanonical annealing and Tsallis annealing), and also to a tabu search algorithm.10 The results which we report show µO to be a very competitive metaheuristic in this domain: when considering both execution time and solution quality, it yielded the best performance of all the evaluated algorithms. In the following section, we describe the microcanonical optimization algorithm. Next, we discuss some implementation details of the alternative metaheuristics considered. In Sec. 4, we present and analyze the results obtained in our work, concluding with our final remarks in Sec. 5.

2. Microcanonical Optimization

The microcanonical optimization algorithm consists of two procedures which are alternately applied: initialization and sampling. The initialization implements a local and optionally aggressive search of the solution space, in order to reach a local-minimum configuration. From there, the sampling phase proceeds, trying to free the solution from the local minimum, by taking it to another configuration of equivalent cost. One can picture the metaheuristic, once stuck in a local-minimum valley, as trying to evolve by going around the peaks in the solution space, instead of attempting to climb them, as in simulated annealing, for instance. This is done by resorting to the microcanonical simulation algorithm by Creutz,11 which generates samples of fixed-energy configurations (see below). After the sampling phase, a new initialization is run and the algorithm thus proceeds, alternating between the two phases, until a stopping condition is reached. In what follows, we treat in greater detail the two phases of the microcanonical optimization. A pseudocode for the algorithm is given in Appendix A.

2.1. Initialization

In the initialization, µO performs a local search, starting from an arbitrary solution and proposing moves which are accepted only when leading to configurations of lower cost (lower energy, in physical terms). Optionally, an aggressive implementation of this phase can be chosen, meaning that the algorithm will always pick the best candidate in a subset of possible moves.


In a non-aggressive implementation, the only free parameter of the initialization phase defines its stopping condition: since it cannot be rigorously established when a local minimum has been reached, it is necessary to define a maximum number of rejected moves as the criterion for interrupting this phase. In the case of an aggressive implementation (which we chose), it is also necessary to define the number of candidate moves to be considered in each initialization step (500, in our work). We also remark that, for the definition of the parameters to be employed in the sampling phase (see below), a list may be compiled, in the initialization, of those moves which have been rejected for leading to higher costs when compared to the current solution.
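As a concrete illustration, the aggressive initialization can be sketched as follows. This is a minimal Python sketch on a hypothetical one-dimensional toy landscape; the function names, the candidate generator, and the small parameter values are our own illustrative choices, not the paper's 500-candidate, 100n-rejection setup:

```python
def aggressive_init(cost, candidates, s, max_rejected=5):
    """Greedy descent sketch of the initialization phase: repeatedly
    apply the best candidate move, but only when it lowers the cost;
    the cost jumps of rejected (non-improving) moves are collected
    for later use by the sampling phase."""
    rejected_jumps = []
    num_rejected = 0
    while num_rejected < max_rejected:
        best = min(candidates(s), key=cost)   # aggressive: best of the subset
        delta = cost(best) - cost(s)
        if delta < 0:                         # downhill: accept, reset counter
            s, num_rejected = best, 0
        else:                                 # rejected: log the cost jump
            rejected_jumps.append(delta)
            num_rejected += 1
    return s, sorted(rejected_jumps)          # jumps sorted in growing order

# Hypothetical toy landscape: minimize f(x) = (x - 3)^2 over the integers.
f = lambda x: (x - 3) ** 2
moves = lambda x: [x - 1, x + 1]
x, jumps = aggressive_init(f, moves, 20)
print(x, jumps)   # 3 [1, 1, 1, 1, 1]
```

The sorted list of rejected cost jumps returned here is exactly the quantity the adaptive sampling parameters are drawn from.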

2.2. Sampling

As already mentioned, in the sampling phase the µO metaheuristic tries to free itself from the local minimum reached in the initialization, at the same time trying to remain close, in terms of cost, to the best solution so far obtained. It implements, for this purpose, a version of the Creutz algorithm, assuming an extra degree of freedom, called the demon, which generates small perturbations on the current solution. At each sampling iteration, random moves are proposed which are accepted only if the demon is capable of yielding or absorbing the cost difference incurred. In µO, the demon is defined by two parameters: its capacity, D_MAX, and its initial value, D_I. The sampling generates a sequence of states whose energy is conserved, except for small fluctuations which are modeled by the demon. Calling E_S the energy (cost) of the solution obtained in the initialization, and D and E the energy of the demon and of the solution, respectively, at a given instant in the sampling phase, we must have E + D = E_S + D_I = constant. Thus, in terms of the initial energy and the capacity of the demon, this phase generates solutions in the cost interval [E_S − D_MAX + D_I, E_S + D_I]. D_I and D_MAX are, therefore, the main parameters to be considered in the implementation of the sampling. In the original formulation of the algorithm, such parameters were taken, at each sampling phase, as fixed fractions of the final cost obtained in the previous initialization.7 As one of the contributions of the present work, we have proposed an adaptive strategy for the determination of such parameters: taking the list of rejected movements compiled in the initialization phase (see above), we have sorted it in growing order of the cost jumps, choosing two of its lower entries as the values of demon capacity and initial energy.
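Under these definitions, the demon acceptance rule of the sampling phase can be sketched as below. This is an illustrative Python sketch on a hypothetical toy landscape; the function names and parameter values are our own, chosen only to exhibit the conservation law E + D = constant:

```python
import random

def sampling(cost, propose, s, d_init, d_max, iters=50, rng=random):
    """Creutz-demon sketch of the sampling phase: a move is accepted
    only if the demon can pay for a cost increase (D stays >= 0) or
    absorb a cost decrease (D stays <= d_max), so E + D is conserved."""
    d = d_init
    for _ in range(iters):
        s_new = propose(s, rng)
        delta = cost(s_new) - cost(s)     # costchange = E' - E
        if 0 <= d - delta <= d_max:       # demon stays within [0, d_max]
            s, d = s_new, d - delta
    return s, d

# Hypothetical toy landscape and move generator.
f = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.choice([-1, 1])
s0, d0 = 3, 2                              # start at a local minimum
s, d = sampling(f, step, s0, d_init=d0, d_max=4, rng=random.Random(1))
# Microcanonical invariant: E + D is conserved along the whole walk.
print(f(s) + d == f(s0) + d0)              # True
```

Because every accepted move transfers exactly the cost difference to or from the demon, the final state necessarily lies in the interval [E_S − D_MAX + D_I, E_S + D_I] quoted above.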
The idea is that such values will be representative of the hills found in the landscape of the region being searched in the solution space, and will thus be adequate for defining the magnitude of the perturbations required for the evolution of the current solution, in the sampling phase. In our implementations of µO for the TSP, the initialization was executed until a count of 100n consecutively rejected moves was reached, where n was the number


of cities in the problem. The values of D_MAX and D_I were both usually taken as equal to the 5th lowest entry in the list of rejected moves compiled in the initialization, except for a certain kind of city distribution which required a change in this prescription (see Sec. 4). The sampling phase was run for only 50 iterations, and the algorithm was made to stop when reaching a count of 1000 moves without improvement in the best solution encountered.

3. Alternative Strategies

In our experiments, we compared the performance of µO to those yielded by alternative strategies: simulated annealing, microcanonical annealing, Tsallis annealing and tabu search. Here we discuss some of the features of the implementation of such algorithms in our work.

3.1. Simulated annealing (SA)

Simulated annealing, as proposed by Kirkpatrick et al.,4 consists in the iterated implementation of the Metropolis algorithm,12 for a sequence of decreasing temperatures. The Metropolis algorithm is a computational procedure, long known in statistical physics, which generates samples of the states of a physical system at a fixed temperature. Since such a system obeys the Gibbs distribution, the states generated at low temperatures will be low-energy states.13,14 Identifying the energy of the system with the cost function in an optimization problem, Kirkpatrick et al. proposed the following optimization strategy: starting from an arbitrary solution, and a high temperature, the Metropolis algorithm is implemented, which means that moves are proposed which are accepted with probability p = min(1, exp(−ΔE/T)), where ΔE is the cost variation incurred, and T is the current temperature. After a large number of iterations, the value of T is decreased, and the process is repeated until T → 0. The initial value and rate of decrease of the temperature (which has no physical meaning in the optimization, being just a global control parameter of the process) constitute the annealing schedule of the algorithm.
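For reference, the Metropolis acceptance step and a geometric cooling schedule can be sketched as follows. This is a Python sketch on a hypothetical toy landscape; only the 7% cooling rate is taken from the paper, and the other constants are illustrative:

```python
import math
import random

def simulated_annealing(cost, propose, s, t0, alpha=0.93,
                        steps_per_t=200, t_min=1e-3, rng=random):
    """SA sketch: Metropolis sampling at each temperature, then a
    geometric temperature decrease (alpha = 0.93 mirrors the paper's
    7% decrease per annealing step)."""
    t, best = t0, s
    while t > t_min:
        for _ in range(steps_per_t):
            s_new = propose(s, rng)
            delta = cost(s_new) - cost(s)
            # Metropolis rule: accept with probability min(1, exp(-delta/T)).
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                s = s_new
                if cost(s) < cost(best):
                    best = s
        t *= alpha
    return best

# Hypothetical toy landscape: minimize f(x) = (x - 3)^2 over the integers.
f = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.choice([-1, 1])
best = simulated_annealing(f, step, 50, t0=100.0, rng=random.Random(2))
print(best)   # settles at or very near the global minimum, x = 3
```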
In our implementations, we followed the prescriptions by Cerny,1 taking the temperature to decrease by 7% of its value at each annealing step, and keeping it constant for 10n accepted moves or 100n rejected moves, whichever came first, with n being the number of cities in the problem.15 The initial temperature was empirically determined: 100 trial moves, starting from the initial random solution, were analyzed, and the initial temperature was chosen greater than the maximum cost variation observed.

3.2. Tsallis annealing

This corresponds to a variant of simulated annealing, based on the statistics proposed by C. Tsallis.16 Here, the acceptance probability of the Metropolis algorithm is generalized to p = min(1, [1 − (1 − q)ΔE/T]^(1/(1−q))), such that SA is recovered in the limit of q → 1. By appropriately choosing the value of q, it has been claimed3


that this algorithm can produce plausible TSP solutions in fewer steps than with fast simulated annealing.5 In our implementations, we followed the general annealing prescriptions described above for SA. As for the parameter q, specific to the Tsallis annealing, it has been suggested that the algorithm improves, in what concerns execution times, as q decreases towards −∞.3 Such general behavior was confirmed in our work, but, even though an exhaustive analysis has not been undertaken, we noticed a corresponding degradation in solution quality for q < −1. The value q = −1 was therefore employed in our experiments.
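The generalized acceptance probability can be sketched and checked numerically as follows. This is a Python sketch; treating a non-positive bracket as outright rejection is the usual convention and is our assumption here:

```python
import math

def tsallis_acceptance(delta, t, q):
    """Generalized acceptance probability
    p = min(1, [1 - (1 - q) * delta / t] ** (1 / (1 - q))),
    which recovers the Metropolis factor exp(-delta/t) as q -> 1."""
    if delta <= 0:
        return 1.0
    if q == 1.0:
        return math.exp(-delta / t)           # the q -> 1 limit
    base = 1.0 - (1.0 - q) * delta / t
    if base <= 0.0:                           # outside the support: reject
        return 0.0
    return min(1.0, base ** (1.0 / (1.0 - q)))

# As q -> 1, the generalized probability approaches exp(-delta/T):
for q in (0.9, 0.99, 0.999):
    print(q, tsallis_acceptance(1.0, 2.0, q))   # tends to exp(-0.5) ~ 0.607
```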

3.3. Microcanonical annealing (MA)

This algorithm also corresponds to a variant of simulated annealing, now based on a simulation of the states of a physical system at fixed energy, through the Creutz algorithm,6 instead of at fixed temperature (the SA and Tsallis algorithms would thus correspond to canonical annealings). As originally proposed for visual processing applications, MA employed a lattice of demons, and was suited only for parallel implementations. In our single-demon sequential version, microcanonical annealing consists, basically, in the iterative application of the Creutz algorithm for progressively lower values of demon capacity. In our implementations, we took a demon of zero initial energy, such that, at the ith annealing step, states would be generated in the cost interval [E^(i−1) − D^(i), E^(i−1)], where D^(i) represents the current demon capacity, and E^(i−1) represents the final energy reached in the previous annealing step. The rate of decrease of the demon capacity was the same used in the canonical annealings for temperature decrease, with the initial demon value determined similarly to the initial annealing temperature: starting from a random solution, 100 prospective moves were analyzed, and the largest cost variation was taken as the demon capacity in the first annealing step.
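This single-demon scheme can be sketched as below (a Python sketch on a hypothetical toy landscape; the 7% capacity decrease follows the paper, everything else is illustrative). Note the invariant it exhibits: because the demon restarts each level at zero energy, the cost at the end of an annealing step can never exceed the cost at its start:

```python
import random

def microcanonical_annealing(cost, propose, s, d0, alpha=0.93,
                             steps_per_level=200, d_min=0.5, rng=random):
    """MA sketch: Creutz dynamics at progressively smaller demon
    capacities. The demon restarts each level with zero energy, so
    states stay in the band [E_start - capacity, E_start]."""
    d_cap = d0
    while d_cap > d_min:
        d = 0.0                              # zero initial demon energy
        for _ in range(steps_per_level):
            s_new = propose(s, rng)
            delta = cost(s_new) - cost(s)
            if 0 <= d - delta <= d_cap:      # demon stays in [0, capacity]
                s, d = s_new, d - delta
        d_cap *= alpha                       # anneal the capacity by 7%
    return s

f = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.choice([-1, 1])
s = microcanonical_annealing(f, step, 10, d0=60.0, rng=random.Random(3))
print(f(s) <= f(10))   # True: the final cost never exceeds the starting cost
```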

3.4. Tabu search

In order to avoid getting entrapped in a local minimum, the tabu search algorithm selects, at each step, the best of a certain number of candidate moves (500, in our implementations), even if it leads to a higher cost, in which case the corresponding reverse move is included in a tabu list, to prevent the return to a solution already considered. In our experiments, we worked with a tabu list of 7 moves, following the suggestion of Glover,10 with each new tabu move being included in a random position in the list, so that its interdiction period would also be random. Another feature of our implementations was a so-called aspiration criterion, according to which, if a given tabu move leads to a solution which tops the best one so far encountered, its interdiction is ignored. The tabu search was made to stop at a count of 1500 moves without improvement.
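A minimal version of this scheme can be sketched as follows. This is a Python sketch on a hypothetical toy landscape; for simplicity the tabu list here is a FIFO queue rather than the paper's random-position insertion, and the candidate set is just the two neighboring moves:

```python
from collections import deque

def tabu_search(cost, candidates, s, tabu_len=7, max_no_improve=30):
    """Tabu search sketch: always take the best non-tabu candidate,
    even uphill; the reverse of each accepted move becomes tabu.
    A tabu move is still allowed if it beats the best solution found
    so far (the aspiration criterion)."""
    best, stall = s, 0
    tabu = deque(maxlen=tabu_len)
    while stall < max_no_improve:
        options = [(cost(s_new), s_new, rev)
                   for s_new, key, rev in candidates(s)
                   if key not in tabu or cost(s_new) < cost(best)]
        if not options:
            break
        _, s, rev = min(options, key=lambda o: o[0])
        tabu.append(rev)                     # forbid the reverse move
        if cost(s) < cost(best):
            best, stall = s, 0
        else:
            stall += 1
    return best

# Hypothetical landscape: local minimum at x = 0, deeper one at x = 7.
f = lambda x: min(x ** 2, (x - 7) ** 2 - 5)
moves = lambda x: [(x + 1, ('R', x), ('L', x + 1)),
                   (x - 1, ('L', x), ('R', x - 1))]
print(tabu_search(f, moves, 0))   # 7: the search climbs out of x = 0
```

Plain iterative improvement would stop at x = 0 on this landscape; the tabu mechanism forces the uphill walk across the barrier.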


4. Experiments

Our performance evaluation of µO was based on the solution of several instances of the Euclidean TSP, employing a path-reversal dynamics.17 This means that the solution cost was taken as the total tour length measured by the Euclidean norm, and that each trial move was a replacement of a randomly selected section of the tour by its reverse. Results obtained with a Pentium 133 processor will be reported here, for the following city distributions:

P100: 100 cities organized in a rectangular grid. Such a distribution, also employed by Cerny1 and by Laarhoven and Aarts,14 displays a global minimum which can be easily perceived, and is an example of degenerate topology, allowing many solutions of the same cost.

P300: 300 cities randomly distributed in eight distinct clusters along the sides of a square region. The optimal path, which is not known a priori, must cross each cluster only once.

PR76, PR124 and PR439: Configurations of 76, 124 and 439 cities, respectively, proposed by Padberg and Rinaldi, and compiled in the TSPLIB library.18 The corresponding optimal solutions are also shown in the TSPLIB.

K200: Configuration of 200 cities proposed by Krolak and also found, along with its optimal solution, in the TSPLIB.

In order to appreciate the quality of the solutions yielded by the various algorithms, we considered the distribution of the results obtained in several runs. The frequency histograms of the final costs for P100 and P300, in 50 executions, are shown in Figs. 1 and 2, where we include the results for the iterative improvement algorithm, which corresponds to implementing only the non-aggressive initialization phase of µO. From the figures, the superiority of the microcanonical optimization over the other approaches is apparent, but the tabu search and microcanonical annealing methods also proved to be competitive. SA and Tsallis annealing yielded poorer-quality solutions, even though the latter was very fast.
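The path-reversal dynamics used as the move set above can be sketched as follows (a Python sketch; the four-city example is our own illustration of how a single section reversal removes a crossing):

```python
import math

def tour_length(cities, tour):
    """Total Euclidean length of a closed tour (list of city indices)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reverse_section(tour, i, j):
    """Path-reversal trial move: replace tour[i..j] by its reverse."""
    new = tour[:]
    new[i:j + 1] = reversed(new[i:j + 1])
    return new

# Four cities on a unit square: the crossing tour 0-2-1-3 is longer
# than the perimeter tour, and a single section reversal repairs it.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
bad = [0, 2, 1, 3]
good = reverse_section(bad, 1, 2)            # -> [0, 1, 2, 3]
print(round(tour_length(cities, bad), 3))    # 4.828  (2 + 2*sqrt(2))
print(tour_length(cities, good))             # 4.0    (the square's perimeter)
```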
Table 1 gives an idea of the average running times involved. It is important to remark that, due to the peculiarities of implementation of each algorithm, some of them tend naturally to prolong their execution in comparison to others. For instance, µO and tabu search will only stop after reaching a certain number of iterations without improvement, which means that, even after a long period without any progress, once those algorithms find a better configuration, they are granted an additional running time (of 1000 iterations for µO, and 1500 for tabu search). The same is not true of the annealing strategies, which have their running times linked to fixed annealing schedules.


[Figure 1: six frequency histograms, one per method (Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, Microcanonical Optimization); horizontal axis: cost (3700 to 4068); vertical axis: frequency (0 to 80).]

Fig. 1. Frequency, in fifty runs, of the final costs obtained for Problem P100.

[Figure 2: six frequency histograms, one per method (Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, Microcanonical Optimization); horizontal axis: cost (2725 to 3850); vertical axis: frequency (0 to 30).]

Fig. 2. Frequency, in fifty runs, of the final costs obtained for Problem P300.

Table 1. Average execution time (minutes), in five runs of µO, tabu search, microcanonical annealing (MA), Tsallis annealing, and simulated annealing (SA). Processor: Pentium 133.

        µO     Tabu   MA     Tsallis  SA
P100    0:48   0:58   0:47   1:05     2:37
P300    2:25   3:29   2:59   1:48     4:40

From such initial results, we have been led to undertake a more careful comparative analysis of µO, tabu search and microcanonical annealing. Table 2 summarizes the results obtained in 50 runs for the distributions K200 and PR76. The corresponding graphs of running time versus final cost for K200 are depicted in Fig. 3. We see that the microcanonical annealing did not show any appreciable variation in execution time, even though it performed quite poorly, in this respect, in problem K200. Tabu search, on the other hand, showed a behavior similar to that of µO, a feature which was observed for all configurations where the cities were evenly distributed over the plane, without the formation of well-defined groups. The solutions yielded by µO were slightly superior to those generated by the annealing, but required a little longer processing time in K200. A different situation was met in problems PR124 and PR439, which share the peculiar characteristic of presenting relatively distant groups of densely packed cities, in a topology quite distinct from the ones previously analyzed. Such topology gives rise to the existence of a large number of local-minimum solutions, differing only in the intra-group sequences of cities, which are very close in cost. In this kind of problem, the intrinsic divide-and-conquer nature of annealing4,6 proves to be quite invaluable, since it allows the initial optimization of the long paths between groups, which are dominant in terms of cost, leaving the finer details of
Table 2. Average, maximum, and minimum values obtained in 50 runs of µO, tabu search, and microcanonical annealing (MA), for problems K200 and PR76. E means cost and t means execution time, in minutes. Processor: Pentium 133.

K200    Eavg    Emin    Emax    tavg   tmin   tmax
µO      30160   29696   30941   2:00   0:59   4:00
Tabu    30392   29869   31010   1:28   0:52   2:35
MA      30271   29771   31009   8:42   8:33   8:48

PR76    Eavg    Emin    Emax    tavg   tmin   tmax
µO      108357  108159  109085  3:42   2:34   5:43
Tabu    108736  108159  109921  5:38   2:37   11:43
MA      109418  108159  111115  3:32   3:27   3:37


[Figure 3: three scatter plots of execution time (00:00 to 11:31, minutes) versus final cost (29400 to 34400), one each for Microcanonical Optimization, Microcanonical Annealing, and Tabu Search.]

Fig. 3. Execution times versus final costs obtained in fifty runs for K200. Times in minutes.

the intra-group paths for posterior processing. In contrast to that, tabu search, by accepting, at each step, the least expensive move (as long as it is not tabu), restricts itself, most of the time, to short-scale changes in the solutions. Therefore, it has difficulty in processing the large-scale corrections of the paths between groups. Similarly, µO finds it hard to evolve in such topology, unless the demon parameters are chosen large enough to accommodate large-scale rearrangements. For this reason, in our implementations for PR124 and PR439, instead of the 5th entry in the list of rejected moves, we had to choose, for the demon parameters, the 25th term there. As illustrated in Fig. 4, for PR439, tabu search, which received no special tuning for this particular situation, fared worse in those problems.


[Figure 4: three scatter plots of execution time (00:00 to 25:55, minutes) versus final cost (105000 to 125000), one each for Microcanonical Optimization, Microcanonical Annealing, and Tabu Search.]

Fig. 4. Execution times versus final costs obtained in fifty runs for PR439. Times in minutes.

It is interesting, in this respect, to remark that µO seems to be more efficient than tabu search in breaking loose from local-minimum configurations. The curves in Fig. 5, obtained for problem P300, illustrate this. The plots show the values of the current solution and of the best solution so far encountered, as the algorithms evolve. The tabu heuristic, once in a local minimum, accepts the best of the proposed moves, irrespective of its cost. Since moves which are quite bad can thus be accepted repeatedly, the heuristic tends to stray from the best solution so far obtained. This should be compared to the behavior of µO, where the limited capacity of the demon keeps the current and the best solutions always close. This, nevertheless, does not seem to compromise the quality of the overall optimization: the algorithm is able


[Figure 5: two line plots (Tabu Search; Microcanonical Optimization) of cost (2800 to 3000) versus implementation step (1 to 81), each showing the current solution as a fine line and the best solution as a thick line.]

Fig. 5. Comparative evolution of current solution (fine line) and best solution (thick line), at each implementation step, for P300.

[Figure 6: two frequency histograms (Microcanonical Optimization, cost 2800 to 3300; Tabu Search, cost 2825 to 3325); vertical axis: frequency (0 to 25).]

Fig. 6. Frequency, in fifty runs, of the final costs obtained for Problem P300, with execution time limited to 3 min.


to find a way to a near-optimal solution, passing only through intermediary states which are approximately local minima. Finally, since the quality of the final results is also a function of the execution time, and since µO and tabu search obey different stopping criteria, we also compared their performance in limited-time implementations. The distributions of results obtained in 50 runs for P300, with a time limit of 3 min, are shown in Fig. 6, which makes clear, once again, the better performance of µO.

5. Conclusions

We have presented an analysis of the performance of a new heuristic, the microcanonical optimization algorithm (µO), when applied to the Euclidean traveling salesman problem. When confronted with alternative approaches to the TSP (simulated annealing, microcanonical annealing, Tsallis annealing and tabu search), µO yielded the best overall results in our experiments. We have found it to be consistently faster than simulated annealing and consistently superior, in terms of solution quality, to the Tsallis annealing, even though the latter proved to be an efficient strategy for finding plausible solutions in short running times, as already claimed.3 Microcanonical annealing and tabu search also performed well in our analysis. Due to the adaptive divide-and-conquer nature of the annealing, MA was able to outperform tabu search (though not µO), in what concerns the quality of the solutions, in certain problems with highly non-uniform city distributions, which require a scale-dependent processing. In most of the other experiments, tabu search proved itself the closest competitor to µO, yielding slightly inferior results in comparable execution times. We conclude that µO is a very promising heuristic for combinatorial optimization problems, as demonstrated by its extremely robust and efficient performance in the benchmark application of the TSP.

References

1. V. Cerny, J. Optimization Theory and Applications 45, 41 (1985).
2. J. J. Hopfield and D. W. Tank, Biol. Cybern. 52, 141 (1985).
3. T. J. P. Penna, Phys. Rev. E 51, 1 (1995).
4. S. Kirkpatrick, C. D. Gelatt, and M. Vecchi, Science 220, 671 (1983).
5. H. Szu and R. Hartley, Phys. Lett. A 122, 157 (1987).
6. S. T. Barnard, Int. J. Comp. Vision 3, 17 (1989).
7. J. R. A. Torreão and E. Roe, Phys. Lett. A 205, 377 (1995).
8. J. L. Fernandes and J. R. A. Torreão, in Lecture Notes in Computer Science, Proc. 3rd Asian Conf. on Computer Vision (Springer-Verlag, Heidelberg, 1998), to appear.
9. S. C. S. Porto, A. M. Barroso, and J. R. A. Torreão, in Proc. 2nd Metaheuristics Int. Conf. (INRIA, Sophia-Antipolis, 1997), p. 103.
10. F. Glover, ORSA J. Comp. 1, 190 (1989); ORSA J. Comp. 2, 4 (1990).
11. M. Creutz, Phys. Rev. Lett. 50, 1411 (1983).
12. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).


13. L. E. Reichl, A Modern Course in Statistical Physics (The University of Texas Press, Austin, 1986).
14. P. J. M. Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications (Kluwer Academic Publishers, Amsterdam, 1987).
15. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, Cambridge, 1992).
16. C. Tsallis, J. Stat. Phys. 52, 479 (1988).
17. S. Lin and B. W. Kernighan, Op. Res. 21, 498 (1973).
18. G. Reinelt, ORSA J. Comp. 3, 376 (1991).

Appendix A

Here we present the pseudocode for the microcanonical optimization metaheuristic.

µO algorithm
begin
    Let maxcycle be the maximum number of iterations without improvement of the solution cost;
    repeat
        do Initialization;
        do Sampling;
    until (maxcycle is reached)
end

Fig. A.1. µO algorithm.

procedure Initialization
begin
    Empty list-of-rejected-moves;
    Let maxinit be the maximum number of consecutive rejected moves;
    Let s be the starting solution of the initialization phase;
    num_rejmoves ← 0;
    while (num_rejmoves < maxinit) do
    begin
        Choose a move randomly;
        Call the new solution s′;
        Compute cost E of solution s;
        Compute cost E′ of solution s′;
        costchange ← E′ − E;
        if (costchange ≥ 0) then
        begin
            Put costchange in the list-of-rejected-moves;
            num_rejmoves ← num_rejmoves + 1;
        end if
        else
        begin
            num_rejmoves ← 0;
            s ← s′;
        end else
    end while
end

Fig. A.2. Initialization procedure.

procedure Sampling
begin
    Select D_MAX and D_I from the list-of-rejected-moves;
    Let maxsamp be the maximum number of sampling iterations;
    Let s be the starting solution of the sampling phase;
    num_iter ← 0;
    D ← D_I;
    while (num_iter < maxsamp) do
    begin
        Choose a move randomly;
        Call the new solution s′;
        Compute cost E of solution s;
        Compute cost E′ of solution s′;
        costchange ← E′ − E;
        if (costchange ≤ 0) then
        begin
            if (D − costchange ≤ D_MAX) then
            begin
                s ← s′;
                D ← D − costchange;
            end if
        end if
        else {costchange > 0}
        begin
            if (D − costchange ≥ 0) then
            begin
                s ← s′;
                D ← D − costchange;
            end if
        end else
        num_iter ← num_iter + 1;
    end while
end

Fig. A.3. Sampling procedure.
