
Simulated annealing

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. For problems with many local optima, SA can find the global optimum.[1] It is often used when
the search space is discrete (for example the traveling salesman
problem, the boolean satisfiability problem, protein structure
prediction, and job-shop scheduling). For problems where finding
an approximate global optimum is more important than finding a
precise local optimum in a fixed amount of time, simulated
annealing may be preferable to exact algorithms such as gradient
descent or branch and bound.
The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to alter its physical properties. The properties altered by annealing depend on the material's thermodynamic free energy: heating and cooling the material affects both its temperature and its thermodynamic free energy (Gibbs energy). Simulated annealing can be used for very hard computational optimization problems where exact algorithms fail; even though it usually achieves only an approximate solution to the global minimum, this can be enough for many practical problems.

Simulated annealing can be used to solve combinatorial problems. Here it is applied to the travelling salesman problem to minimize the length of a route that connects all 125 points.

The problems solved by SA are currently formulated by an objective function of many variables, subject to several mathematical constraints. In practice, the constraints can be penalized as part of the objective function.

Similar techniques have been independently introduced on several occasions, including Pincus (1970),[2] Khachaturyan et al. (1979,[3] 1981[4]), Kirkpatrick, Gelatt and Vecchi (1983), and Černý (1985).[5] In 1983, this approach was used by Kirkpatrick, Gelatt Jr. and Vecchi[6] for a solution of the traveling salesman problem. They also proposed its current name, simulated annealing.

Travelling salesman problem in 3D for 120 points solved with simulated annealing.

This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions allows for a more extensive search for the global optimal solution. In general, simulated annealing algorithms work as follows. The temperature progressively decreases from an initial positive value to zero. At each time step, the algorithm randomly selects a solution close to the current one, measures its quality, and moves to it according to temperature-dependent acceptance probabilities: the probability of accepting a better solution remains at 1 (or positive) throughout the search, while the probability of accepting a worse solution decreases toward zero.
The simulation can be performed either by a solution of kinetic equations for probability density
functions,[7][8] or by using a stochastic sampling method.[6][9] The method is an adaptation of the
Metropolis–Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic
system, published by N. Metropolis et al. in 1953.[10]

Overview
The states of a physical system correspond to candidate solutions, and the function E(s) to be minimized is analogous to the internal energy of the system in state s. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

Simulated annealing searching for a maximum. The objective here is to get to the
highest point. In this example, it is not enough to use a simple hill climb algorithm,
as there are many local maxima. By cooling the temperature slowly the global
maximum is found.

The basic iteration

At each step, the simulated annealing heuristic considers some neighboring state s* of the current state s,
and probabilistically decides between moving the system to state s* or staying in-state s. These probabilities
ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system
reaches a state that is good enough for the application, or until a given computation budget has been
exhausted.

The neighbours of a state

Optimization of a solution involves evaluating the neighbours of a state of the problem, which are new
states produced through conservatively altering a given state. For example, in the travelling salesman
problem each state is typically defined as a permutation of the cities to be visited, and the neighbors of any
state are the set of permutations produced by swapping any two of these cities. The well-defined way in
which the states are altered to produce neighboring states is called a "move", and different moves give
different sets of neighboring states. These moves usually result in minimal alterations of the last state, in an
attempt to progressively improve the solution through iteratively improving its parts (such as the city
connections in the traveling salesman problem).
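As a concrete sketch, the pair-swap move described above can be written in a few lines of Python; the representation of a state as a list of city indices is an assumption for illustration, not part of the original description.

```python
import random

def swap_move(tour):
    """Return a neighbouring tour by swapping two randomly chosen cities.

    `tour` is assumed to be a list of city indices visited in order.
    """
    i, j = random.sample(range(len(tour)), 2)  # two distinct positions
    neighbour = tour[:]                        # copy so the current state is untouched
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour
```

Each call produces a tour that differs from the input in exactly two positions, which is the "conservative alteration" the text refers to.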

Simple heuristics like hill climbing, which move by finding better neighbour after better neighbour and stop
when they have reached a solution which has no neighbours that are better solutions, cannot guarantee to
lead to any of the existing better solutions – their outcome may easily be just a local optimum, while the
actual best solution would be a global optimum that could be different. Metaheuristics use the neighbours of
a solution as a way to explore the solution space, and although they prefer better neighbours, they also
accept worse neighbours in order to avoid getting stuck in local optima; they can find the global optimum if
run for a long enough amount of time.

Acceptance probabilities

The probability of making the transition from the current state s to a candidate new state s_new is specified by an acceptance probability function P(e, e_new, T), that depends on the energies e = E(s) and e_new = E(s_new) of the two states, and on a global time-varying parameter T called the temperature. States with a smaller energy are better than those with a greater energy. The probability function P must be positive even when e_new is greater than e. This feature prevents the method from becoming stuck at a local minimum that is worse than the global one.

When T tends to zero, the probability P(e, e_new, T) must tend to zero if e_new > e and to a positive value otherwise. For sufficiently small values of T, the system will then increasingly favor moves that go "downhill" (i.e., to lower energy values), and avoid those that go "uphill." With T = 0 the procedure reduces to the greedy algorithm, which makes only the downhill transitions.

In the original description of simulated annealing, the probability P(e, e_new, T) was equal to 1 when e_new < e; i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of simulated annealing still take this condition as part of the method's definition. However, this condition is not essential for the method to work.

The function P is usually chosen so that the probability of accepting a move decreases when the difference e_new − e increases; that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met.

Given these properties, the temperature T plays a crucial role in controlling the evolution of the state s of the system with regard to its sensitivity to the variations of system energies. To be precise, for a large T, the evolution of s is sensitive to coarser energy variations, while it is sensitive to finer energy variations when T is small.

The annealing schedule

Example illustrating the effect of cooling schedule on the performance of simulated annealing. The problem is to rearrange the pixels of an image so as to minimize a certain potential energy function, which causes similar colors to attract at short range and repel at a slightly larger distance. The elementary moves swap two adjacent pixels. These images were obtained with a fast cooling schedule (left) and a slow cooling schedule (right), producing results similar to amorphous and crystalline solids, respectively.
The name and inspiration of the algorithm demand that the temperature variation be embedded in its operational characteristics: the temperature must be gradually reduced as the simulation proceeds. The algorithm starts with T set to a high value (or infinity), and T is then decreased at each step following some annealing schedule, which may be specified by the user but must end with T = 0 towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower; and finally move downhill according to the steepest descent heuristic.

For any given finite problem, the probability that the simulated annealing algorithm terminates with a global
optimal solution approaches 1 as the annealing schedule is extended.[11] This theoretical result, however, is
not particularly helpful, since the time required to ensure a significant probability of success will usually
exceed the time required for a complete search of the solution space.[12]

Pseudocode
The following pseudocode presents the simulated annealing heuristic as described above. It starts from a
state s0 and continues until a maximum of kmax steps have been taken. In the process, the call
neighbour(s) should generate a randomly chosen neighbour of a given state s; the call random(0, 1)
should pick and return a value in the range [0, 1], uniformly at random. The annealing schedule is defined by the call temperature(r), which should yield the temperature to use, given the fraction r of the time budget that remains.

Let s = s0
For k = 0 through kmax (exclusive):
    T ← temperature( 1 - (k+1)/kmax )
    Pick a random neighbour, snew ← neighbour(s)
    If P(E(s), E(snew), T) ≥ random(0, 1):
        s ← snew
Output: the final state s
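For readers who prefer running code, the pseudocode translates directly into Python. This is a sketch, not a reference implementation: it assumes the Metropolis-style acceptance function discussed later in the article, and the toy objective in the usage example is purely illustrative.

```python
import math
import random

def simulated_annealing(s0, energy, neighbour, temperature, kmax):
    """Direct transcription of the pseudocode above.

    energy(s)      -- the objective E(s) to be minimised
    neighbour(s)   -- returns a randomly chosen neighbour of s
    temperature(r) -- annealing schedule; r is the remaining budget fraction
    """
    s = s0
    for k in range(kmax):
        t = temperature(1 - (k + 1) / kmax)
        s_new = neighbour(s)
        e, e_new = energy(s), energy(s_new)
        # Accept downhill moves always; uphill moves with prob. exp(-(e_new - e)/t).
        if e_new < e or (t > 0 and random.random() < math.exp(-(e_new - e) / t)):
            s = s_new
    return s

# Toy usage: minimise x^2 over the integers with +/-1 moves.
random.seed(0)
result = simulated_annealing(
    s0=10,
    energy=lambda x: x * x,
    neighbour=lambda x: x + random.choice([-1, 1]),
    temperature=lambda r: r,   # simple linear schedule, ending at 0
    kmax=5000,
)
```

Because the schedule reaches zero by the final step, the last iterations behave greedily and the toy run ends at or very near x = 0.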

Selecting the parameters


In order to apply the simulated annealing method to a specific problem, one must specify the following parameters: the state space, the energy (goal) function E(), the candidate generator procedure neighbour(), the acceptance probability function P(), the annealing schedule temperature(), and the initial temperature init_temp. These choices can have a significant impact on the method's effectiveness. Unfortunately, there are no choices of these parameters that will be good for all problems, and there is no general way to find the best choices for a given problem. The following sections give some general guidelines.

Sufficiently near neighbour


Simulated annealing may be modeled as a random walk on a search graph, whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the neighbour() function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum; that is, the diameter of the search graph must be small. In the traveling salesman example above, for instance, the search space for n = 20 cities has n! = 2,432,902,008,176,640,000 (about 2.4 quintillion) states; yet the number of neighbors of each vertex is n(n − 1)/2 = 190 edges (coming from n choose 2), and the diameter of the graph is n − 1.
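The counts quoted above are easy to verify; a quick, purely illustrative Python check:

```python
import math

n = 20
num_states = math.factorial(n)      # size of the search space: n!
num_neighbours = n * (n - 1) // 2   # city pairs available for a swap: C(n, 2)
diameter = n - 1                    # any permutation is reachable in at most n - 1 swaps

print(num_states)       # 2432902008176640000
print(num_neighbours)   # 190
```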

Transition probabilities

To investigate the behavior of simulated annealing on a particular problem, it can be useful to consider the transition probabilities that result from the various design choices made in the implementation of the algorithm. For each edge (s, s_new) of the search graph, the transition probability is defined as the probability that the simulated annealing algorithm will move to state s_new when its current state is s. This probability depends on the current temperature as specified by temperature(), on the order in which the candidate moves are generated by the neighbour() function, and on the acceptance probability function P(). (Note that the transition probability is not simply P(e, e_new, T), because the candidates are tested serially.)

Acceptance probabilities

The specification of neighbour(), P(), and temperature() is partially redundant. In practice, it's
common to use the same acceptance function P() for many problems, and adjust the other two functions
according to the specific problem.

In the formulation of the method by Kirkpatrick et al., the acceptance probability function P(e, e_new, T) was defined as 1 if e_new < e, and exp(−(e_new − e)/T) otherwise. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to the Metropolis–Hastings algorithm, in the case where T = 1 and the proposal distribution of Metropolis–Hastings is symmetric. However, this acceptance probability is often used for simulated annealing even when the neighbour() function, which is analogous to the proposal distribution in Metropolis–Hastings, is not symmetric, or not probabilistic at all. As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperature need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system, at any temperature. Nevertheless, most descriptions of simulated annealing assume the original acceptance function, which is probably hard-coded in many implementations of SA.
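A minimal Python sketch of this standard acceptance rule (assuming a strictly positive temperature t):

```python
import math

def acceptance_probability(e, e_new, t):
    """Kirkpatrick et al.'s acceptance rule: 1 for downhill moves,
    exp(-(e_new - e)/t) for uphill moves (t assumed positive)."""
    if e_new < e:
        return 1.0
    return math.exp(-(e_new - e) / t)
```

An uphill move whose energy increase equals the current temperature is accepted with probability exp(−1), roughly 0.37, which makes concrete how the temperature sets the scale of tolerated deterioration.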

In 1990, Moscato and Fontanari,[13] and independently Dueck and Scheuer,[14] proposed that a deterministic update (i.e. one that is not based on the probabilistic acceptance rule) could speed up the optimization process without impacting the final quality. Moscato and Fontanari conclude, from observing the analog of the "specific heat" curve of the "threshold updating" annealing originating from their study, that "the stochasticity of the Metropolis updating in the simulated annealing algorithm does not play a major role in the search of near-optimal minima". Instead, they proposed that "the smoothening of the cost function landscape at high temperature and the gradual definition of the minima during the cooling process are the fundamental ingredients for the success of simulated annealing." The method was subsequently popularized under Dueck and Scheuer's name for it, "threshold accepting". In 2001, Franz, Hoffmann and Salamon showed that the deterministic update strategy is indeed the optimal one within the large class of algorithms that simulate a random walk on the cost/energy landscape.[15]

Efficient candidate generation

When choosing the candidate generator neighbour(), one must consider that after a few iterations of
the simulated annealing algorithm, the current state is expected to have much lower energy than a random
state. Therefore, as a general rule, one should skew the generator towards candidate moves where the
energy of the destination state is likely to be similar to that of the current state. This heuristic (which is
the main principle of the Metropolis–Hastings algorithm) tends to exclude "very good" candidate moves as
well as "very bad" ones; however, the former are usually much less common than the latter, so the heuristic
is generally quite effective.

In the traveling salesman problem above, for example, swapping two consecutive cities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping two arbitrary cities is far more likely to increase its length than to decrease it. Thus, the consecutive-swap neighbour generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum (with n − 1 swaps, instead of n(n − 1)/2).

A more precise statement of the heuristic is that one should try first candidate states s_new for which P(E(s), E(s_new), T) is large. For the "standard" acceptance function P above, it means that E(s_new) − E(s) is on the order of T or less. Thus, in the traveling salesman example above, one could use a neighbour() function that swaps two random cities, where the probability of choosing a city-pair vanishes as their distance increases beyond √T.

Barrier avoidance

When choosing the candidate generator neighbour() one must also try to reduce the number of "deep" local minima: states (or sets of connected states) that have much lower energy than all their neighbouring states. Such "closed catchment basins" of the energy function may trap the simulated annealing algorithm with high probability (roughly proportional to the number of states in the basin) and for a very long time (roughly exponential in the energy difference between the surrounding states and the bottom of the basin).

As a rule, it is impossible to design a candidate generator that will satisfy this goal and also prioritize candidates with similar energy. On the other hand, one can often vastly improve the efficiency of simulated annealing by relatively simple changes to the generator. In the traveling salesman problem, for instance, it is not hard to exhibit two tours A and B, with nearly equal lengths, such that (1) A is optimal, (2) every sequence of city-pair swaps that converts A to B goes through tours that are much longer than both, and (3) A can be transformed into B by flipping (reversing the order of) a set of consecutive cities. In this example, A and B lie in different "deep basins" if the generator performs only random pair-swaps; but they will be in the same basin if the generator performs random segment-flips.
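A segment-flip move of this kind (essentially the reversal move familiar from 2-opt local search) can be sketched in Python; as before, the list-of-cities representation is an illustrative assumption.

```python
import random

def segment_flip_move(tour):
    """Return a neighbouring tour by reversing a random consecutive segment.

    Unlike a pair swap, this move can connect tours that pair-swaps
    would separate by a high-energy barrier.
    """
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```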

Cooling schedule

The physical analogy that is used to justify simulated annealing assumes that the cooling rate is low enough
for the probability distribution of the current state to be near thermodynamic equilibrium at all times.
Unfortunately, the relaxation time—the time one must wait for the equilibrium to be restored after a change
in temperature—strongly depends on the "topography" of the energy function and on the current
temperature. In the simulated annealing algorithm, the relaxation time also depends on the candidate
generator, in a very complicated way. Note that all these parameters are usually provided as black box
functions to the simulated annealing algorithm. Therefore, the ideal cooling rate cannot be determined
beforehand, and should be empirically adjusted for each problem. Adaptive simulated annealing algorithms
address this problem by connecting the cooling schedule to the search progress. Another adaptive approach, thermodynamic simulated annealing,[16] automatically adjusts the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics.
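As a simple concrete example, a geometric schedule with an empirically tuned rate is a common default; the parameter values below are illustrative, not prescriptive.

```python
def geometric_schedule(t0, alpha):
    """Return a step-indexed cooling schedule T_k = t0 * alpha**k (0 < alpha < 1).

    t0 and alpha are exactly the kind of parameters that, as noted above,
    usually have to be adjusted empirically for each problem.
    """
    return lambda k: t0 * alpha ** k

temp = geometric_schedule(t0=100.0, alpha=0.95)
# temp(0) is 100.0; the temperature falls below 1% of t0 after about 90 steps
```

Note this variant is indexed by the step number k rather than by the remaining-budget fraction used in the pseudocode above; either convention works as long as the temperature decreases toward zero.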

Restarts
Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state. This process is called restarting of simulated annealing. To do this we set s and e to sbest and ebest and perhaps restart the annealing schedule. The decision to restart could be based on several criteria, including a fixed number of steps having passed, the current energy being too high compared to the best energy obtained so far, or restarting randomly.
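A sketch of the fixed-interval restart criterion in Python, building on the basic annealing loop above; the function name, the Metropolis-style acceptance rule, and the restart interval are all illustrative assumptions.

```python
import math
import random

def annealing_with_restarts(s0, energy, neighbour, temperature, kmax,
                            restart_every=500):
    """Simulated annealing that remembers the best state seen (sbest, ebest)
    and periodically restarts the search from it."""
    s, e = s0, energy(s0)
    s_best, e_best = s, e
    for k in range(kmax):
        if k > 0 and k % restart_every == 0 and e > e_best:
            s, e = s_best, e_best  # restart: jump back to the best solution so far
        t = temperature(1 - (k + 1) / kmax)
        s_new = neighbour(s)
        e_new = energy(s_new)
        if e_new < e or (t > 0 and random.random() < math.exp(-(e_new - e) / t)):
            s, e = s_new, e_new
        if e < e_best:
            s_best, e_best = s, e
    return s_best, e_best
```

Returning the best state seen, rather than the final state, is itself a cheap safeguard that is often combined with restarting.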

Related methods
Interacting Metropolis–Hastings algorithms (a.k.a. sequential Monte Carlo[17]) combine
simulated annealing moves with an acceptance-rejection step for the best-fitted
individuals, equipped with an interacting recycling mechanism.
Quantum annealing uses "quantum fluctuations" instead of thermal fluctuations to get
through high but thin barriers in the target function.
Stochastic tunneling attempts to overcome the increasing difficulty simulated annealing runs
have in escaping from local minima as the temperature decreases, by 'tunneling' through
barriers.
Tabu search normally moves to neighbouring states of lower energy, but will take uphill
moves when it finds itself stuck in a local minimum; and avoids cycles by keeping a "taboo
list" of solutions already seen.
Dual-phase evolution is a family of algorithms and processes (to which simulated annealing
belongs) that mediate between local and global search by exploiting phase changes in the
search space.
Reactive search optimization focuses on combining machine learning with optimization, by
adding an internal feedback loop to self-tune the free parameters of an algorithm to the
characteristics of the problem, of the instance, and of the local situation around the current
solution.
Genetic algorithms maintain a pool of solutions rather than just one. New candidate
solutions are generated not only by "mutation" (as in SA), but also by "recombination" of two
solutions from the pool. Probabilistic criteria, similar to those used in SA, are used to select
the candidates for mutation or combination, and for discarding excess solutions from the
pool.
Memetic algorithms search for solutions by employing a set of agents that both cooperate
and compete in the process; sometimes the agents' strategies involve simulated annealing
procedures for obtaining high quality solutions before recombining them.[18] Annealing has
also been suggested as a mechanism for increasing the diversity of the search.[19]
Graduated optimization digressively "smooths" the target function while optimizing.
Ant colony optimization (ACO) uses many ants (or agents) to traverse the solution space and
find locally productive areas.
The cross-entropy method (CE) generates candidate solutions via a parameterized
probability distribution. The parameters are updated via cross-entropy minimization, so as to
generate better samples in the next iteration.
Harmony search mimics musicians in an improvisation process, where each musician plays a
note in search of the best harmony together.
Stochastic optimization is an umbrella set of methods that includes simulated annealing and
numerous other approaches.
Particle swarm optimization is an algorithm modelled on swarm intelligence that finds a
solution to an optimization problem in a search space, or models and predicts social behavior
in the presence of objectives.
The runner-root algorithm (RRA) is a meta-heuristic optimization algorithm for solving
unimodal and multimodal problems inspired by the runners and roots of plants in nature.
The intelligent water drops algorithm (IWD) mimics the behavior of natural water drops to
solve optimization problems.
Parallel tempering is a simulation of model copies at different temperatures (or
Hamiltonians) to overcome the potential barriers.
Multi-objective simulated annealing algorithms have been used in multi-objective
optimization.[20]

See also
Adaptive simulated annealing
Automatic label placement
Combinatorial optimization
Dual-phase evolution
Graph cuts in computer vision
Intelligent water drops algorithm
Markov chain
Molecular dynamics
Multidisciplinary optimization
Particle swarm optimization
Place and route
Quantum annealing
Traveling salesman problem

References
1. "What is Simulated Annealing?" (https://www.cs.cmu.edu/afs/cs.cmu.edu/project/learn-43/lib/
photoz/.g/web/glossary/anneal.html). www.cs.cmu.edu. Retrieved 2023-05-13.
2. Pincus, Martin (Nov–Dec 1970). "A Monte-Carlo Method for the Approximate Solution of
Certain Types of Constrained Optimization Problems" (https://doi.org/10.1287%2Fopre.18.6.
1225). Operations Research. 18 (6): 1225–1228.
doi:10.1287/opre.18.6.1225 (https://doi.org/10.1287%2Fopre.18.6.1225).
3. Khachaturyan, A.; Semenovskaya, S.; Vainshtein, B. (1979). "Statistical-
Thermodynamic Approach to Determination of Structure Amplitude Phases". Soviet Physics,
Crystallography. 24 (5): 519–524.
4. Khachaturyan, A.; Semenovskaya, S.; Vainshtein, B. (1981). "The Thermodynamic Approach
to the Structure Analysis of Crystals" (http://scripts.iucr.org/cgi-bin/paper?S05677394810016
30). Acta Crystallographica. A37 (5): 742–754. Bibcode:1981AcCrA..37..742K (https://ui.ads
abs.harvard.edu/abs/1981AcCrA..37..742K). doi:10.1107/S0567739481001630 (https://doi.o
rg/10.1107%2FS0567739481001630).
5. Laarhoven, P. J. M. van (Peter J. M.) (1987). Simulated annealing : theory and applications
(https://www.worldcat.org/oclc/15548651). Aarts, E. H. L. (Emile H. L.). Dordrecht: D. Reidel.
ISBN 90-277-2513-6. OCLC 15548651 (https://www.worldcat.org/oclc/15548651).
6. Kirkpatrick, S.; Gelatt Jr, C. D.; Vecchi, M. P. (1983). "Optimization by Simulated Annealing".
Science. 220 (4598): 671–680. Bibcode:1983Sci...220..671K (https://ui.adsabs.harvard.edu/
abs/1983Sci...220..671K). CiteSeerX 10.1.1.123.7607 (https://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.123.7607). doi:10.1126/science.220.4598.671 (https://doi.org/10.112
6%2Fscience.220.4598.671). JSTOR 1690046 (https://www.jstor.org/stable/1690046).
PMID 17813860 (https://pubmed.ncbi.nlm.nih.gov/17813860). S2CID 205939 (https://api.se
manticscholar.org/CorpusID:205939).
7. Khachaturyan, A.; Semenovskaya, S.; Vainshtein, B. (1979). "Statistical-Thermodynamic
Approach to Determination of Structure Amplitude Phases". Sov.Phys. Crystallography. 24
(5): 519–524.
8. Khachaturyan, A.; Semenovskaya, S.; Vainshtein, B. (1981). "The Thermodynamic Approach
to the Structure Analysis of Crystals". Acta Crystallographica. 37 (A37): 742–754.
Bibcode:1981AcCrA..37..742K (https://ui.adsabs.harvard.edu/abs/1981AcCrA..37..742K).
doi:10.1107/S0567739481001630 (https://doi.org/10.1107%2FS0567739481001630).
9. Černý, V. (1985). "Thermodynamical approach to the traveling salesman problem: An
efficient simulation algorithm". Journal of Optimization Theory and Applications. 45: 41–51.
doi:10.1007/BF00940812 (https://doi.org/10.1007%2FBF00940812). S2CID 122729427 (htt
ps://api.semanticscholar.org/CorpusID:122729427).
10. Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.;
Teller, Edward (1953). "Equation of State Calculations by Fast Computing Machines". The
Journal of Chemical Physics. 21 (6): 1087. Bibcode:1953JChPh..21.1087M (https://ui.adsab
s.harvard.edu/abs/1953JChPh..21.1087M). doi:10.1063/1.1699114 (https://doi.org/10.106
3%2F1.1699114). OSTI 4390578 (https://www.osti.gov/biblio/4390578). S2CID 1046577 (htt
ps://api.semanticscholar.org/CorpusID:1046577).
11. Granville, V.; Krivanek, M.; Rasson, J.-P. (1994). "Simulated annealing: A proof of
convergence". IEEE Transactions on Pattern Analysis and Machine Intelligence. 16 (6):
652–656. doi:10.1109/34.295910 (https://doi.org/10.1109%2F34.295910).
12. Nolte, Andreas; Schrader, Rainer (1997), "A Note on the Finite Time Behaviour of Simulated
Annealing" (http://link.springer.com/10.1007/978-3-642-60744-8_32), Operations Research
Proceedings 1996, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 1996, pp. 175–180,
doi:10.1007/978-3-642-60744-8_32 (https://doi.org/10.1007%2F978-3-642-60744-8_32),
ISBN 978-3-540-62630-5, retrieved 2023-02-06
13. Moscato, P.; Fontanari, J.F. (1990), "Stochastic versus deterministic update in simulated
annealing", Physics Letters A, 146 (4): 204–208, Bibcode:1990PhLA..146..204M (https://ui.a
dsabs.harvard.edu/abs/1990PhLA..146..204M), doi:10.1016/0375-9601(90)90166-L (https://
doi.org/10.1016%2F0375-9601%2890%2990166-L)
14. Dueck, G.; Scheuer, T. (1990), "Threshold accepting: A general purpose optimization
algorithm appearing superior to simulated annealing", Journal of Computational Physics, 90
(1): 161–175, Bibcode:1990JCoPh..90..161D (https://ui.adsabs.harvard.edu/abs/1990JCoP
h..90..161D), doi:10.1016/0021-9991(90)90201-B (https://doi.org/10.1016%2F0021-9991%2
890%2990201-B), ISSN 0021-9991 (https://www.worldcat.org/issn/0021-9991)
15. Franz, A.; Hoffmann, K.H.; Salamon, P (2001), "Best optimal strategy for finding ground
states", Physical Review Letters, 86 (3): 5219–5222, doi:10.1103/PhysRevLett.86.5219 (http
s://doi.org/10.1103%2FPhysRevLett.86.5219), PMID 11384462 (https://pubmed.ncbi.nlm.ni
h.gov/11384462)
16. De Vicente, Juan; Lanchares, Juan; Hermida, Román (2003). "Placement by thermodynamic
simulated annealing". Physics Letters A. 317 (5–6): 415–423.
Bibcode:2003PhLA..317..415D (https://ui.adsabs.harvard.edu/abs/2003PhLA..317..415D).
doi:10.1016/j.physleta.2003.08.070 (https://doi.org/10.1016%2Fj.physleta.2003.08.070).
17. Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2006). "Sequential Monte Carlo samplers".
Journal of the Royal Statistical Society, Series B. 68 (3): 411–436. arXiv:cond-mat/0212648
(https://arxiv.org/abs/cond-mat/0212648). doi:10.1111/j.1467-9868.2006.00553.x (https://doi.
org/10.1111%2Fj.1467-9868.2006.00553.x). S2CID 12074789 (https://api.semanticscholar.o
rg/CorpusID:12074789).
18. Moscato, Pablo (June 1993). "An introduction to population approaches for optimization and
hierarchical objective functions: A discussion on the role of tabu search". Annals of
Operations Research. 41 (2): 85–121. doi:10.1007/BF02022564 (https://doi.org/10.1007%2F
BF02022564). S2CID 35382644 (https://api.semanticscholar.org/CorpusID:35382644).
19. Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial
Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program (report 826).
20. Deb, Bandyopadhyay (June 2008). "A Simulated Annealing-Based Multiobjective
Optimization Algorithm: AMOSA". IEEE Transactions on Evolutionary Computation. 12 (3):
269–283. doi:10.1109/TEVC.2007.900837 (https://doi.org/10.1109%2FTEVC.2007.900837).
S2CID 12107321 (https://api.semanticscholar.org/CorpusID:12107321).

Further reading
A. Das and B. K. Chakrabarti (Eds.), Quantum Annealing and Related Optimization Methods
(ftp://nozdr.ru/biblio/kolxoz/M/MP/Das%20A.,%20Chakrabarti%20B.K.%20(eds.)%20Quantu
m%20Annealing%20and%20Related%20Optimization%20Methods%20(LNP0679,%20Spri
nger,%202005)(384s)_MP_.pdf), Lecture Note in Physics, Vol. 679, Springer, Heidelberg
(2005)
Weinberger, E. (1990). "Correlated and uncorrelated fitness landscapes and how to tell the
difference". Biological Cybernetics. 63 (5): 325–336. doi:10.1007/BF00202749 (https://doi.or
g/10.1007%2FBF00202749). S2CID 851736 (https://api.semanticscholar.org/CorpusID:8517
36).
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 10.12. Simulated
Annealing Methods" (http://apps.nrbook.com/empanel/index.html#pg=549). Numerical
Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press.
ISBN 978-0-521-88068-8.
Strobl, M.A.R.; Barker, D. (2016). "On simulated annealing phase transitions in phylogeny
reconstruction" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4912009). Molecular
Phylogenetics and Evolution. 101: 46–55. doi:10.1016/j.ympev.2016.05.001 (https://doi.org/1
0.1016%2Fj.ympev.2016.05.001). PMC 4912009 (https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC4912009). PMID 27150349 (https://pubmed.ncbi.nlm.nih.gov/27150349).
V.Vassilev, A.Prahova: "The Use of Simulated Annealing in the Control of Flexible
Manufacturing Systems", International Journal INFORMATION THEORIES &
APPLICATIONS, VOLUME 6/1999 (http://www.foibg.com/ijita/vol01-09/ijita-fv06.htm)

External links
Simulated Annealing (http://www.heatonresearch.com/aifh/vol1/tsp_anneal.html) A
Javascript app that allows you to experiment with simulated annealing. Source code
included.
"General Simulated Annealing Algorithm" (http://www.mathworks.com/matlabcentral/fileexch
ange/loadFile.do?objectId=10548&objectType=file) An open-source MATLAB program for
general simulated annealing exercises.
Self-Guided Lesson on Simulated Annealing, a Wikiversity project.
Google in superposition of using, not using quantum computer (https://arstechnica.com/scien
ce/news/2009/12/uncertainty-hovers-over-claim-googles-using-quantum-computer.ars) Ars
Technica discusses the possibility that the D-Wave computer being used by Google may, in
fact, be an efficient simulated annealing co-processor.
[1] (https://ieeexplore.ieee.org/document/4358775) A Simulated Annealing-Based
Multiobjective Optimization Algorithm: AMOSA.
