
List of metaphor-based metaheuristics

From Wikipedia, the free encyclopedia



A diagram classifying the various kinds of metaheuristics

This is a chronologically ordered list of metaphor-based metaheuristics and swarm
intelligence algorithms, sorted by decade of proposal.

Contents

• 1 Algorithms
o 1.1 1980s-1990s
▪ 1.1.1 Simulated annealing (Kirkpatrick et al., 1983)
▪ 1.1.2 Ant colony optimization (ACO) (Dorigo, 1992)
▪ 1.1.3 Particle swarm optimization (PSO) (Kennedy & Eberhart, 1995)
o 1.2 2000s
▪ 1.2.1 Harmony search (HS) (Geem, Kim & Loganathan, 2001)
▪ 1.2.2 Artificial bee colony algorithm (Karaboga, 2005)
▪ 1.2.3 Bees algorithm (Pham, 2005)
▪ 1.2.4 Imperialist competitive algorithm (Atashpaz-Gargari & Lucas, 2007)
▪ 1.2.5 River formation dynamics (Rabanal, Rodríguez & Rubio, 2007)
▪ 1.2.6 Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi, 2009)
o 1.3 2010s
▪ 1.3.1 Bat algorithm (Yang, 2010)
▪ 1.3.2 Spiral optimization (SPO) algorithm (Tamura & Yasuda 2011, 2016-2017)
▪ 1.3.3 Artificial swarm intelligence (Rosenberg, 2014)
• 2 Criticism of the metaphor methodology
• 3 See also
• 4 Notes
• 5 References
• 6 External links

Algorithms

1980s-1990s
Simulated annealing (Kirkpatrick et al., 1983)
Main article: Simulated annealing

Visualization of simulated annealing solving a three-dimensional travelling salesman problem instance on 120 points

Simulated annealing is a probabilistic algorithm inspired by annealing, a heat
treatment method in metallurgy. It is often used when the search space is discrete
(e.g., all tours that visit a given set of cities). For problems where finding the
precise global optimum is less important than finding an acceptable local optimum
in a fixed amount of time, simulated annealing may be preferable to alternatives
such as gradient descent.
The analogue of the slow cooling of annealing is a slow decrease in the probability
of simulated annealing accepting worse solutions as it explores the solution space.
Accepting worse solutions is a fundamental property of metaheuristics because it
allows for a more extensive search for the optimal solution.
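The accept-or-reject loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a canonical implementation; the cost function, neighbour move, and geometric cooling schedule are illustrative choices:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    """Minimize `cost` from x0, accepting worse moves with probability
    exp(-delta/T); the temperature T decays geometrically (slow cooling)."""
    x = x0
    fx = cost(x)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Always accept improvements; accept a worse solution with a
        # probability that shrinks as the temperature decreases.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy discrete search space: minimize (x - 7)^2 over the integers 0..100.
random.seed(0)
sol, val = simulated_annealing(
    cost=lambda x: (x - 7) ** 2,
    neighbor=lambda x: min(100, max(0, x + random.choice([-1, 1]))),
    x0=50,
)
```

At high temperature almost any move is accepted (broad exploration); as the temperature falls the search behaves increasingly like greedy descent.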
Ant colony optimization (ACO) (Dorigo, 1992)
Main article: Ant colony optimization algorithms
The ant colony optimization algorithm is a probabilistic technique for solving
computational problems which can be reduced to finding good paths
through graphs. Initially proposed by Marco Dorigo in 1992 in his PhD thesis,[1][2] the
first algorithm aimed to find an optimal path in a graph, based on the
behavior of ants seeking a path between their colony and a source of food. The
original idea has since diversified to solve a wider class of numerical problems, and
as a result, several problems have emerged, drawing on various aspects
of the behavior of ants. From a broader perspective, ACO performs a model-based
search[3] and shares some similarities with estimation of distribution algorithms.
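A minimal sketch of the path-finding variant described above, in Python. The pheromone-update rule and parameter values here are one simple choice among many used in the ACO literature, and the graph is a toy example:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iters=50,
                      evaporation=0.5, q=1.0):
    """Find a short src->dst path in a weighted digraph (dict of dicts).
    Ants pick edges with probability proportional to pheromone / weight;
    pheromone evaporates each iteration and is reinforced on good paths."""
    pher = {u: {v: 1.0 for v in graph[u]} for u in graph}
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [pher[node][v] / graph[node][v] for v in choices]
                node = random.choices(choices, weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                paths.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Evaporate, then deposit pheromone inversely to path length.
        for u in pher:
            for v in pher[u]:
                pher[u][v] *= (1 - evaporation)
        for path, length in paths:
            for a, b in zip(path, path[1:]):
                pher[a][b] += q / length
    return best_path, best_len

graph = {0: {1: 1, 2: 4}, 1: {2: 1, 3: 5}, 2: {3: 1}, 3: {}}
random.seed(1)
path, length = aco_shortest_path(graph, 0, 3)
```

Short paths accumulate pheromone faster than long ones, so later ants are increasingly biased toward them; this is the positive-feedback mechanism at the heart of ACO.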
Particle swarm optimization (PSO) (Kennedy & Eberhart, 1995)
Main article: Particle swarm optimization
Particle swarm optimization is a computational method that optimizes a problem
by iteratively trying to improve a candidate solution with regard to a given measure
of quality. It solves a problem by having a population of candidate solutions,
dubbed particles, and moving these particles around in the search space according
to simple mathematical formulae over the particle's position and velocity. Each
particle's movement is influenced by its local best known position, but is also
guided toward the best known positions in the search-space, which are updated as
better positions are found by other particles. This is expected to move the swarm
toward the best solutions.
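The velocity-and-position update described above can be sketched as follows. The inertia and acceleration coefficients are common textbook values, not part of the original formulation, and the bound handling is a simple clamp:

```python
import random

def pso(f, dim, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [lo, hi]^dim. Each particle is pulled toward its own
    best position (cognitive term) and the swarm's best (social term)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive (personal) + social (global) terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
best, value = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5, 5))
```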
PSO was originally attributed to Kennedy, Eberhart and Shi[4][5] and was first intended
for simulating social behaviour[6] as a stylized representation of the movement of
organisms in a bird flock or fish school. The algorithm was later simplified, and it was
observed to be performing optimization. The book by Kennedy and
Eberhart[7] describes many philosophical aspects of PSO and swarm intelligence.
An extensive survey of PSO applications is made by Poli.[8][9] A comprehensive
review on theoretical and experimental works on PSO has been published by
Bonyadi and Michalewicz.[10]
2000s
Harmony search (HS) (Geem, Kim & Loganathan, 2001)
Harmony search is a phenomenon-mimicking metaheuristic introduced in 2001 by
Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan[11] and is inspired by the
improvisation process of jazz musicians. In the HS algorithm, a set of possible
solutions is randomly generated and stored in the harmony memory. A new solution is
generated using all the solutions in the harmony memory (rather than just two,
as in genetic algorithms), and if this new solution is better than the worst solution in the
harmony memory, the worst solution is replaced. The effectiveness
and advantages of HS have been demonstrated in various applications like design
of municipal water distribution networks,[12] structural design,[13] load dispatch
problem in electrical engineering,[14] multi-objective optimization,[15] rostering
problems,[16] clustering,[17] and classification and feature selection.[18][19] A detailed
survey of HS applications can be found in [20] and [21], and applications of HS in data
mining can be found in [22].
Dennis (2015) claimed that harmony search is a special case of the evolution
strategies algorithm.[23] However, Saka et al. (2016) argue that the structure of
evolution strategies is different from that of harmony search.[24]
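The memory-consideration, pitch-adjustment and replacement steps described above can be sketched in Python as follows. The parameter names (HMCR, PAR, bandwidth) follow common usage in the HS literature, and the values are illustrative:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.1, n_iters=2000):
    """Minimize f over [lo, hi]^dim. Each variable of a new harmony is
    drawn from the whole memory (HMCR), optionally pitch-adjusted (PAR),
    or sampled at random; the new harmony replaces the worst if better."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    values = [f(h) for h in memory]
    for _ in range(n_iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                x = random.choice(memory)[d]                 # memory consideration
                if random.random() < par:
                    x += random.uniform(-1, 1) * bandwidth   # pitch adjustment
            else:
                x = random.uniform(lo, hi)                   # random selection
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: values[i])
        fv = f(new)
        if fv < values[worst]:                               # replace worst harmony
            memory[worst], values[worst] = new, fv
    best = min(range(hms), key=lambda i: values[i])
    return memory[best], values[best]

random.seed(0)
best, value = harmony_search(lambda x: sum(xi * xi for xi in x),
                             dim=2, bounds=(-5, 5))
```

Note that each variable of the new solution can come from a different stored harmony, which is the sense in which HS recombines "all the solutions" rather than two parents.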
Artificial bee colony algorithm (Karaboga, 2005)
Main article: Artificial bee colony algorithm
Artificial bee colony algorithm is a metaheuristic introduced by Karaboga in
2005[25] which simulates the foraging behaviour of honey bees. The ABC algorithm
has three phases: employed bee, onlooker bee and scout bee. In the employed
bee and onlooker bee phases, bees exploit the food sources through local searches in
the neighbourhood of selected solutions; solutions are selected deterministically in the
employed bee phase and probabilistically in the onlooker bee phase. In
the scout bee phase, which is analogous to bees abandoning exhausted food
sources in the foraging process, solutions that no longer benefit
search progress are abandoned, and new solutions are inserted in their place to explore
new regions of the search space. The algorithm is intended to balance exploration
and exploitation.
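A simplified sketch of the three phases described above. The neighbour-generation rule and the fitness transform follow common presentations of ABC; the parameter values are illustrative:

```python
import random

def artificial_bee_colony(f, dim, bounds, n_sources=10, limit=20, n_iters=200):
    """Minimize f: employed bees refine every source, onlookers refine
    sources chosen by fitness, and scouts replace sources that have
    failed to improve `limit` times in a row."""
    lo, hi = bounds
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    vals = [f(s) for s in sources]
    trials = [0] * n_sources

    def try_neighbor(i):
        # Perturb one dimension toward/away from a random other source.
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = sources[i][:]
        cand[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        fv = f(cand)
        if fv < vals[i]:                         # greedy selection
            sources[i], vals[i], trials[i] = cand, fv, 0
        else:
            trials[i] += 1

    for _ in range(n_iters):
        for i in range(n_sources):               # employed bee phase
            try_neighbor(i)
        fit = [1.0 / (1.0 + v) for v in vals]    # fitness for minimization
        for _ in range(n_sources):               # onlooker bee phase
            i = random.choices(range(n_sources), fit)[0]
            try_neighbor(i)
        for i in range(n_sources):               # scout bee phase
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(sources[i]), 0
    best = min(range(n_sources), key=lambda i: vals[i])
    return sources[best], vals[best]

random.seed(0)
best, value = artificial_bee_colony(lambda x: sum(xi * xi for xi in x),
                                    dim=2, bounds=(-5, 5))
```

Because the perturbation is proportional to the distance between sources, step sizes shrink automatically as the population converges.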
Bees algorithm (Pham, 2005)
Main article: Bees algorithm
The bees algorithm was formulated by Pham and his co-workers in 2005[26] and
further refined in 2009.[27] Modelled on the foraging behaviour of honey bees, the
algorithm combines global explorative search with local exploitative search. A small
number of artificial bees (scouts) randomly explores the solution space
(environment) for solutions of high fitness (highly profitable food sources), whilst
the bulk of the population searches (harvests) the neighbourhood of the fittest
solutions, looking for the fitness optimum. A deterministic recruitment procedure
which simulates the waggle dance of biological bees is used to communicate the
scouts' findings to the foragers, and distribute the foragers depending on the
fitness of the neighbourhoods selected for local search. Once the search in the
neighbourhood of a solution stagnates, the local fitness optimum is considered to
be found, and the site is abandoned.
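A simplified sketch of the scout/forager division described above. The recruitment counts and the fixed neighbourhood radius are illustrative simplifications; the full algorithm also shrinks neighbourhoods and abandons stagnant sites:

```python
import random

def bees_algorithm(f, dim, bounds, n_scouts=20, n_best=5, n_elite=2,
                   recruits_best=10, recruits_elite=30, radius=0.5,
                   n_iters=100):
    """Minimize f: the best sites get foragers searching their
    neighbourhoods (more for elite sites), while the remaining bees
    scout new random sites for global exploration."""
    lo, hi = bounds
    sites = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_scouts)]
    for _ in range(n_iters):
        sites.sort(key=f)
        new_sites = []
        for rank, site in enumerate(sites[:n_best]):
            n_foragers = recruits_elite if rank < n_elite else recruits_best
            # Local search: foragers sample within `radius` of the site.
            forages = [[min(hi, max(lo, x + random.uniform(-radius, radius)))
                        for x in site] for _ in range(n_foragers)]
            new_sites.append(min(forages + [site], key=f))
        # Remaining bees scout new random sites (global search).
        new_sites += [[random.uniform(lo, hi) for _ in range(dim)]
                      for _ in range(n_scouts - n_best)]
        sites = new_sites
    return min(sites, key=f)

random.seed(0)
best = bees_algorithm(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5, 5))
value = sum(xi * xi for xi in best)
```

The elite/best split plays the role of the waggle dance: more foragers are recruited to the most profitable sites.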
Imperialist competitive algorithm (Atashpaz-Gargari & Lucas, 2007)
Main article: Imperialist competitive algorithm
The imperialist competitive algorithm (ICA), like most of the methods in the area
of evolutionary computation, does not need the gradient of the function in its
optimization process. From a specific point of view, ICA can be thought of as the
social counterpart of genetic algorithms (GAs). ICA is the mathematical model and
the computer simulation of human social evolution, while GAs are based on
the biological evolution of species.
This algorithm starts by generating a set of random candidate solutions in the
search space of the optimization problem. The generated random points are called
the initial Countries. Countries in this algorithm are the counterpart of
Chromosomes in GAs and Particles in particle swarm optimization: each country is an
array of values representing a candidate solution to the optimization problem. The cost function of
the optimization problem determines the power of each country. Based on their
power, some of the best initial countries (the countries with the least cost function
value), become Imperialists and start taking control of other countries
(called colonies) and form the initial Empires.[28]
Two main operators of this algorithm are Assimilation and Revolution. Assimilation
makes the colonies of each empire get closer to the imperialist state in the space
of socio-political characteristics (optimization search space). Revolution brings
about sudden random changes in the position of some of the countries in the
search space. During assimilation and revolution, a colony might reach a better
position and then have a chance to take control of the entire empire and
replace the current imperialist state of the empire.[29]
Imperialistic Competition is another part of this algorithm. All the empires try to win
this game and take possession of colonies of other empires. In each step of the
algorithm, based on their power, all the empires have a chance to take control of
one or more of the colonies of the weakest empire.[28]
The algorithm continues with the mentioned steps (Assimilation, Revolution,
Competition) until a stop condition is satisfied.
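The assimilation, revolution and competition steps above can be sketched as follows. This is a deliberately simplified illustration: revolution is applied per dimension, and competition moves only the single weakest colony each iteration:

```python
import random

def ica(f, dim, bounds, n_countries=30, n_imperialists=4,
        n_iters=200, beta=2.0, rev_rate=0.1):
    """Minimize f: the best countries become imperialists, colonies
    assimilate toward them, some revolt, and stronger empires absorb
    the colonies of weaker ones."""
    lo, hi = bounds
    countries = sorted(
        ([random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_countries)),
        key=f)
    imperialists = countries[:n_imperialists]
    colonies = countries[n_imperialists:]
    # Initially deal colonies round-robin among the empires.
    empire_of = [i % n_imperialists for i in range(len(colonies))]
    for _ in range(n_iters):
        for c in range(len(colonies)):
            imp = imperialists[empire_of[c]]
            for d in range(dim):
                if random.random() < rev_rate:   # revolution: random jump
                    colonies[c][d] = random.uniform(lo, hi)
                else:                            # assimilation toward imperialist
                    colonies[c][d] += beta * random.random() * (imp[d] - colonies[c][d])
                colonies[c][d] = min(hi, max(lo, colonies[c][d]))
            # A colony that beats its imperialist swaps roles with it.
            if f(colonies[c]) < f(imp):
                imperialists[empire_of[c]], colonies[c] = colonies[c], imp
        # Imperialistic competition: the weakest colony joins the empire
        # whose imperialist currently has the lowest cost.
        worst = max(range(len(colonies)), key=lambda c: f(colonies[c]))
        empire_of[worst] = min(range(n_imperialists), key=lambda e: f(imperialists[e]))
    return min(imperialists, key=f)

random.seed(0)
best = ica(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5, 5))
value = sum(xi * xi for xi in best)
```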
