Introduction
• The world of metaheuristics is rich and multifaceted
• A number of other successful metaheuristics are available in the literature
• Some of the best-known and most widely applied metaheuristics:
• Simulated Annealing (SA) (Černý, 1985; Kirkpatrick et al., 1983)
• Tabu Search (TS) (Glover, 1989, 1990; Glover & Laguna, 1997)
• Guided Local Search (GLS) (Voudouris & Tsang, 1995; Voudouris, 1997),
• Greedy Randomized Adaptive Search Procedures (GRASP) (Feo & Resende, 1989, 1995)
• Iterated Local Search (ILS) (Lourenço et al., 2002)
• Evolutionary Computation (EC) (Fogel et al., 1966; Goldberg, 1989; Holland, 1975;
Rechenberg, 1973; Schwefel, 1981)
• Scatter search (Glover, 1977)
Introduction
• Metaheuristics try to avoid the generation of poor-quality solutions by
introducing general mechanisms that extend problem-specific, single-
run algorithms like:
• Greedy construction heuristics
• Iterative improvement local search
Introduction
• Differences among available metaheuristics are:
• Techniques employed to avoid getting stuck in suboptimal solutions
• Type of trajectory followed in the space of either partial or full solutions
• Distinction based on how solutions are built - whether metaheuristics are
• Constructive (e.g., ACO and GRASP)
• Local search based (all other metaheuristics)
• Distinction based on what is manipulated at each iteration - whether it is
• A population of solutions (ACO and EC)
• A single solution (all other metaheuristics)
• Constructive and population-based metaheuristics like ACO and EC can be used without resorting to
local search
• Their performance can, however, be greatly improved if they are extended to include it
• GRASP, by contrast, has included local search from the very beginning
Introduction
• Distinction based on use of memory:
• Metaheuristics that exploit memory to direct future search are TS, GLS, and ACO
• All other Metaheuristics do not exploit memory to direct future search
• TS either explicitly memorizes previously encountered solutions or memorizes components of
previously seen solutions
• GLS stores penalties associated with solution components to modify solutions’ evaluation function
• ACO uses pheromones to maintain a memory of past experiences
• Distinction based on termination criterion:
• There is no general termination criterion for most metaheuristics
• Generally, a number of rules of thumb are used
• Maximum CPU time elapsed
• Maximum number of solutions generated
• Reaching a given percentage deviation from a lower/upper bound on the optimum
• Maximum number of iterations without improvement in solution quality
• Metaheuristic-dependent rules of thumb:
• TS - can be stopped if the set of admissible solutions in the neighborhood is empty
• SA - Termination condition is often defined by an annealing schedule
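To make these rules of thumb concrete, the sketch below combines them into a single termination check. This is a hypothetical Python illustration; the function name and threshold values are assumptions, not part of any specific metaheuristic.

```python
import time

def should_terminate(start_time, solutions_generated, iters_without_improvement,
                     max_cpu_seconds=60.0, max_solutions=100_000,
                     max_stagnant_iters=1_000):
    """Generic stopping test combining common rules of thumb."""
    return (time.process_time() - start_time > max_cpu_seconds   # CPU time budget
            or solutions_generated > max_solutions               # solution budget
            or iters_without_improvement > max_stagnant_iters)   # stagnation
```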
Simulated Annealing
• Simulated annealing - SA (Černý, 1985; Kirkpatrick et al., 1983)
• Inspired by an analogy between the physical annealing of solids (crystals) and combinatorial
optimization problems
• In the physical annealing process a solid is first melted and then cooled very slowly, spending a
long time at low temperatures, to obtain a perfect lattice structure corresponding to a minimum
energy state
• SA transfers this process to local search algorithms for combinatorial optimization problems*
• Done by associating set of solutions of problem attacked with states of physical system
• Objective function with physical energy of solid
• Optimal solutions with minimum energy states
*subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects
Simulated Annealing
• SA - Local Search strategy
• Tries to avoid local minima by accepting worse solutions with some probability
• SA starts from some initial solutions and then proceeds as follows:
• At each step, a solution s' ∈ N(s) is generated (often done randomly according to a uniform distribution)
• If s' improves on s, it is accepted; if s' is worse than s, then s' is accepted with a probability which
depends on the difference in objective function value f(s) - f(s'), and on a parameter T, called
temperature
• T is lowered (as is also done in the physical annealing process) during the run of the algorithm, reducing
in this way the probability of accepting solutions worse than the current one
• The probability p_accept to accept a solution s' is often defined according to the Metropolis distribution
(Metropolis, Rosenbluth, Rosenbluth, Teller, & Teller, 1953); for a minimization problem:

p_accept(T, s, s') = 1                         if f(s') < f(s)
p_accept(T, s, s') = exp((f(s) - f(s')) / T)   otherwise
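A minimal Python sketch of this acceptance rule for a minimization problem; `f_s` and `f_s_prime` stand for the objective values f(s) and f(s'), and the function name is an assumption:

```python
import math
import random

def metropolis_accept(f_s, f_s_prime, T):
    """Metropolis criterion: always accept improving moves; accept a
    worsening move with probability exp((f(s) - f(s')) / T), T > 0."""
    if f_s_prime < f_s:
        return True
    return random.random() < math.exp((f_s - f_s_prime) / T)
```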
High-level pseudo-code for Simulated Annealing (SA)
To implement an SA algorithm, the following parameters and
functions have to be specified:
• Function GenerateInitialSolution, that generates an initial
solution
• Function InitializeAnnealingParameters that initializes
several parameters used in the annealing schedule
• Parameters comprise:
• Initial temperature T0
• Number of iterations to be performed at each
temperature (inner loop criterion)
• Termination condition (outer loop criterion)
• Function UpdateTemp that returns new value for
temperature
• Function GenerateNeighbor that chooses a new solution s' in
the neighborhood of the current solution s
• Function AcceptSolution that implements the Metropolis
acceptance rule given above; it decides whether to accept the
solution returned by GenerateNeighbor or not
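Since the pseudo-code itself is not reproduced here, the following Python sketch shows one plausible way the functions listed above fit together; the control flow follows the slide, but the parameter names and loop bounds are assumptions.

```python
import math
import random

def simulated_annealing(f, generate_initial_solution, generate_neighbor,
                        update_temp, T0, inner_iters, outer_iters):
    """High-level SA skeleton: outer cooling loop, inner sampling loop.

    All callables are user-supplied; T is assumed to stay positive."""
    s = generate_initial_solution()
    best = s
    T = T0
    for _ in range(outer_iters):            # outer loop criterion (termination)
        for _ in range(inner_iters):        # inner loop criterion (steps per T)
            s_prime = generate_neighbor(s)  # random neighbor of s
            # Metropolis criterion (see equation above)
            if (f(s_prime) < f(s)
                    or random.random() < math.exp((f(s) - f(s_prime)) / T)):
                s = s_prime
                if f(s) < f(best):
                    best = s
        T = update_temp(T)                  # annealing schedule
    return best
```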
Simulated Annealing
• SA - applied to a wide variety of problems with mixed success (Aarts, Korst, & van
Laarhoven, 1997)
• It is of special appeal to mathematicians due to the fact that, under certain conditions,
convergence of the algorithm to an optimal solution can be proved (Geman &
Geman, 1984; Hajek, 1988; Lundy & Mees, 1986; Romeo & Sangiovanni-
Vincentelli, 1991)
• To guarantee convergence to optimal solution, an impractically slow annealing
schedule has to be used and theoretically an infinite number of states has to be
visited by the algorithm
Tabu Search
• Tabu search (TS) (Glover, 1989, 1990; Glover & Laguna, 1997)
• Relies on systematic use of memory to guide search process
• Distinction between short-term memory and long-term memory:
• Short-term memory, restricts the neighborhood N(s) of the current solution s
to a subset N’(s) ⊆ N(s), and
• Long-term memory, may extend N(s) through the inclusion of additional
solutions (Glover & Laguna, 1997)
Tabu Search
• TS uses local search
• At every step, TS makes best possible move from s to neighbor solution s’ even if new solution is worse
than current one
• In latter case - move that least worsens objective function is chosen
• To avoid cycling (prevent local search from immediately returning to a previously visited solution), TS
can explicitly memorize recently visited solutions and forbid moving back to them
• More commonly, TS forbids reversing effect of recently applied moves by declaring tabu those solution
attributes that change in local search
• Tabu status of solution attributes is then maintained for a number tt of iterations
• Parameter tt is called the tabu tenure or the tabu list length
• Unfortunately, this may forbid moves toward attractive, unvisited solutions
• To avoid such an undesirable situation, an aspiration criterion is used to override tabu status of
certain moves
• Aspiration criterion drops tabu status of moves leading to better solution than best solution visited so
far
Tabu Search
• Use of short-term memory in search process is most widely applied feature of TS
• TS algorithms that only rely on use of short-term memory are called simple tabu search algorithms in
Glover (1989)
• To increase efficiency of simple TS, long-term memory strategies can be used to:
• Intensify search or
• Diversify search
• Intensification strategies - intended to explore promising regions of search space either by recovering elite
solutions (i.e., the best solutions obtained so far) or attributes of these solutions
• Diversification - intended to explore new search space regions through the introduction of new attribute combinations
• Long-term memory strategies - based on memorization of the frequency of solution attributes
• Glover & Laguna (1997) can be referred to for a detailed discussion of techniques exploiting long-term memory
High-level pseudo-code for a simple tabu search (TS)
Functions needed to define Tabu Search are
following:
• Function GenerateInitialSolution, which
generates an initial solution
• Function InitializeMemoryStructures, which
initializes all memory structures used during the
run of TS algorithm
• Function GenerateAdmissibleSolutions, which is
used to determine subset of neighbor solutions
which are not tabu or are tabu but satisfy
aspiration criterion
• Function SelectBestSolution, which returns best
admissible move
• Function UpdateMemoryStructures, which
updates memory structures
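As the pseudo-code itself is not shown, here is a hedged Python sketch of a simple TS assembled from the functions listed above; the attribute-based tabu list, the tenure handling via a fixed-length queue, and the aspiration rule are illustrative assumptions.

```python
from collections import deque

def tabu_search(f, generate_initial_solution, generate_neighbors,
                move_attribute, tabu_tenure, max_iters):
    """Simple TS skeleton using short-term memory only.

    generate_neighbors(s) yields candidate neighbors of s;
    move_attribute(s, s_prime) returns the solution attribute changed
    by the move, which stays tabu for roughly tabu_tenure iterations."""
    s = generate_initial_solution()
    best = s
    tabu_list = deque(maxlen=tabu_tenure)          # memory structure
    for _ in range(max_iters):
        # admissible = not tabu, or tabu but satisfying the aspiration criterion
        admissible = [s_prime for s_prime in generate_neighbors(s)
                      if move_attribute(s, s_prime) not in tabu_list
                      or f(s_prime) < f(best)]     # aspiration: beats best so far
        if not admissible:                         # TS-specific stopping rule
            break
        s_prime = min(admissible, key=f)           # best move, even if worsening
        tabu_list.append(move_attribute(s, s_prime))
        s = s_prime
        if f(s) < f(best):
            best = s
    return best
```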
Tabu Search
• To date, TS appears to be one of the most successful and most widely used
metaheuristics, achieving excellent results for a wide variety of problems (Glover &
Laguna, 1997)
• This efficiency is often due to a significant fine-tuning effort on a large collection of parameters
and different implementation choices (Hertz, Taillard, & de Werra, 1997)
• Reactive TS tries to make TS more robust with respect to parameter settings (Battiti &
Tecchiolli, 1994)
• Theoretical proofs about the behavior of TS exist
• Convergence proof for probabilistic TS (Faigle & Kern, 1992)
• Several deterministic variants of TS implicitly enumerate search space and, hence,
find optimal solution in finite time (Hanafi, 2000; Glover & Hanafi, 2002)
Guided Local Search
• Guided local search (Voudouris, 1997; Voudouris & Tsang, 1995)
• Uses the idea of modifying the evaluation function during the search to escape from local optima
• Uses an augmented cost function h(s), h : S → R, which consists of:
• The original objective function f(.)
• Additional penalty terms p_i associated with each solution feature i
• The augmented cost function is defined as

h(s) = f(s) + ω · Σ_{i=1}^{n} p_i · I_i(s)

where,
ω - determines the influence of the penalties on the augmented cost function
n - number of solution features
p_i - penalty cost associated with solution feature i
I_i(s) - indicator function that takes the value 1 if the solution feature i is present in the solution s and 0
otherwise
Guided Local Search
• GLS uses augmented cost function for choosing local search moves until it gets trapped in a local optimum ŝ
with respect to h(.)
• At this point, a utility value is computed for each feature i using:

u_i = I_i(ŝ) · c_i / (1 + p_i)

where c_i is the cost of feature i
• Features with high costs will have high utility
• Dividing by 1 + p_i scales down the utility of already-penalized features, which prevents the same high-cost
features from being penalized over and over again and keeps the search trajectory from becoming too biased
• Then, penalties of features with maximum utility are incremented and augmented cost function is adapted
by using new penalty values
• Last, local search is continued from ŝ, which will no longer be locally optimal with respect to new augmented
cost function
High-level pseudo-code for guided local search (GLS)
• Functions to be defined for the implementation of
a GLS algorithm are the following:
• Function GenerateInitialSolution, generates an
initial solution
• Function InitializePenalties, initializes penalties of
solution features
• Function ComputeAugmentedObjectiveFunction,
computes new augmented evaluation function
after an update of penalties
• Function LocalSearch, applies a local search
algorithm using augmented evaluation function
• Function UpdatePenalties, updates the penalty
vector when the local search is stuck in a locally
optimal solution
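The pseudo-code itself is again missing, so the following Python sketch shows one plausible arrangement of the listed functions; the feature interface (feature list, costs, indicator function) and the granularity of the penalty update are assumptions.

```python
def guided_local_search(f, generate_initial_solution, local_search,
                        features, feature_cost, indicator, omega, max_iters):
    """GLS skeleton: local search under an augmented cost function h.

    local_search(s, h) returns a local optimum of s with respect to h;
    feature_cost(i) gives c_i and indicator(i, s) implements I_i(s)."""
    penalties = {i: 0 for i in features}            # InitializePenalties

    def h(s):
        # h(s) = f(s) + omega * sum_i p_i * I_i(s)  (augmented cost function)
        return f(s) + omega * sum(penalties[i] * indicator(i, s)
                                  for i in features)

    s = generate_initial_solution()
    best = s
    for _ in range(max_iters):
        s = local_search(s, h)                      # local optimum w.r.t. h
        if f(s) < f(best):
            best = s
        # UpdatePenalties: u_i = I_i(s) * c_i / (1 + p_i)
        utilities = {i: indicator(i, s) * feature_cost(i) / (1 + penalties[i])
                     for i in features}
        u_max = max(utilities.values())
        if u_max > 0:                               # penalize max-utility features
            for i in features:
                if utilities[i] == u_max:
                    penalties[i] += 1
    return best
```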
Guided Local Search
• GLS is derived from earlier approaches that dynamically modified the evaluation
function during the search, such as the breakout method (Morris, 1993) and GENET
(Davenport, Tsang, Wang, & Zhu, 1994)
• GLS has tight connections to other weighting schemes like those used in local
search algorithms for satisfiability problem in propositional logic (SAT) (Selman &
Kautz, 1993; Frank, 1996) or adaptations of Lagrangian methods to local search
(Shang & Wah, 1998)
• In general, algorithms that modify evaluation function at computation time are
becoming more widely used
Iterated Local Search
• Iterated local search – ILS (Lourenço et al., 2002; Martin, Otto, & Felten, 1991)
• Simple and powerful metaheuristic
• Working principle:
• Starting from an initial solution s, a local search is applied
• Once local search is stuck, locally optimal solution ŝ is perturbed by a move in
a neighborhood different from one used by local search
• This perturbed solution s’ is new starting solution for local search that takes it
to new local optimum ŝ’
• Finally, an acceptance criterion decides which of two locally optimal solutions
to select as a starting point for next perturbation step
• Main motivation for ILS is to build a randomized walk in a search space of
local optima with respect to some local search algorithm
High-level pseudo-code for iterated local search (ILS)
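The pseudo-code itself is not reproduced, so the sketch below gives one hedged Python rendering of the ILS working principle described above; the acceptance criterion shown (keep the better of the two local optima) is just one common choice among several.

```python
def iterated_local_search(f, generate_initial_solution, local_search,
                          perturb, max_iters):
    """ILS skeleton: a randomized walk in the space of local optima.

    local_search(s) returns a local optimum; perturb(s) applies a move
    in a neighborhood different from the one used by local search."""
    s_hat = local_search(generate_initial_solution())
    best = s_hat
    for _ in range(max_iters):
        s_prime = perturb(s_hat)             # kick out of the local optimum
        s_hat_prime = local_search(s_prime)  # descend to a new local optimum
        if f(s_hat_prime) < f(s_hat):        # acceptance criterion
            s_hat = s_hat_prime
        if f(s_hat) < f(best):
            best = s_hat
    return best
```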
Greedy Randomized Adaptive Search Procedures
• Greedy randomized adaptive search procedures - GRASP (Feo & Resende, 1989, 1995)
• Randomize greedy construction heuristics to allow the generation of a large number of different starting
solutions for applying a local search
• Iterative procedure with two phases:
• Construction phase
• Local search phase
• In the construction phase a solution is constructed from scratch, adding one solution component at a time
• At each step of the construction heuristic, the solution components are ranked according to some greedy
function and a number of the best-ranked components are included in a restricted candidate list (RCL)
• Typical ways of deriving the RCL are either to take the best γ% of the solution components or to include
all solution components that have a greedy value within some δ% of the best-rated solution component
• Then, one of the components of the restricted candidate list is chosen randomly, according to a uniform
distribution
• Once a full candidate solution is constructed, this solution is improved by a local search phase
High-level pseudo-code for greedy randomized adaptive search procedures
(GRASP)
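As the pseudo-code itself is not shown, here is a hedged Python sketch of the two-phase GRASP loop; the value-based restricted candidate list below corresponds to the "within some δ% of the best-rated component" variant, and the component interface is an assumption.

```python
import random

def grasp(f, candidate_components, greedy_value, add_component,
          is_complete, local_search, delta, max_iters):
    """GRASP skeleton: greedy randomized construction plus local search.

    candidate_components(s) lists components that can extend the partial
    solution s; greedy_value(s, c) ranks them (lower is better); the RCL
    keeps components within a fraction delta of the best greedy value."""
    best = None
    for _ in range(max_iters):
        s = []                                        # construction phase
        while not is_complete(s):
            candidates = candidate_components(s)
            values = {c: greedy_value(s, c) for c in candidates}
            v_min, v_max = min(values.values()), max(values.values())
            threshold = v_min + delta * (v_max - v_min)
            rcl = [c for c in candidates if values[c] <= threshold]
            s = add_component(s, random.choice(rcl))  # uniform pick from RCL
        s = local_search(s)                           # local search phase
        if best is None or f(s) < f(best):
            best = s
    return best
```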