
Optimization Algorithms

In optimization algorithms, a procedure is executed iteratively, comparing candidate solutions until an
optimum or a satisfactory solution is found.

Swarm intelligence
Swarm intelligence has attracted the interest of researchers in many related fields in recent years.
Bonabeau defined swarm intelligence as any attempt to design algorithms or distributed
problem-solving devices inspired by the collective behavior of social insect colonies and other animal
societies.
The classical example of a swarm is bees swarming around their hive; nevertheless the metaphor can
easily be extended to other systems with a similar architecture. An ant colony can be thought of as a
swarm whose individual agents are ants. Similarly a flock of birds is a swarm of birds.
Two fundamental concepts, self-organization and division of labor, are necessary and sufficient
properties to obtain swarm-intelligent behavior such as distributed problem-solving systems that
self-organize and adapt to the given environment:

- Self-organization can be defined as a set of dynamical mechanisms which result in structures at
the global level of a system by means of interactions among its low-level components. These
mechanisms establish basic rules for the interactions between the components of the system.
- Inside a swarm, different tasks are performed simultaneously by specialized individuals. This
kind of phenomenon is called division of labor. Simultaneous task performance by cooperating
specialized individuals is believed to be more efficient than sequential task performance by
unspecialized individuals.

Ant Colony Optimization


Ant colony optimization (ACO) is a technique for solving problems that can be expressed as finding good
paths through graphs, modeled on the way each ant finds a route between its nest and a food source.
The behavior of each ant in nature:

- Wander randomly at first, laying down a pheromone trail.
- If food is found, return to the nest, laying down a pheromone trail.
- If pheromone is found, follow the pheromone trail with some increased probability.
- Once back at the nest, go out again in search of food.
- Pheromones evaporate over time, so unless they are reinforced by more ants, a trail will
disappear.

Other ants follow one of the paths at random, also laying pheromone trails. Since the ants on the
shortest path complete trips faster, that path accumulates pheromone more quickly and is reinforced,
making it more appealing to future ants. The ants become increasingly likely to follow the shortest path
since it is constantly reinforced with a larger amount of pheromone, while the pheromone trails of the
longer paths evaporate.

Algorithm
The optimization problem is expressed as finding short paths in a graph.
Scheme:

Construct ant solutions:

- Define attractiveness τ, based on experience from previous solutions.
- Define a problem-specific visibility function η (e.g. inverse distance).

Ant walk:

- Initialize ants and nodes (states).
- Choose the next edge probabilistically according to its attractiveness and visibility.
- Each ant maintains a tabu list of infeasible transitions for that iteration.
- Update the attractiveness of an edge according to the number of ants that pass through it.

Pheromone update:

- The parameter ρ, 0 <= ρ <= 1, is called the evaporation rate.
- Pheromones are the long-term memory of the ant colony:
  o small ρ -> low evaporation -> slow adaptation
  o large ρ -> high evaporation -> fast adaptation
- The new pheromone deposit Δτ usually contains the base attractiveness constant Q and the
quantity you want to optimize, e.g. Δτ = Q / (length of tour).
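The "probability function" referenced in the pseudocode below is not reproduced in these notes. In the standard ACO formulation (stated here as an assumption, since the source omits the equation), an ant at node i chooses the next allowed node j, and pheromone is then evaporated and deposited, according to:

```latex
p_{ij} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
              {\sum_{k \in \mathrm{allowed}} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}},
\qquad
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
  Q / L_k & \text{if ant } k \text{ used edge } (i,j),\\
  0       & \text{otherwise,}
\end{cases}
```

where α and β weight attractiveness against visibility, ρ is the evaporation rate, and L_k is the length of ant k's tour.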

General ant colony pseudocode:

initialize the base attractiveness τ and visibility η for each edge;
for i < IterationMax do:
    for each ant do:
        choose probabilistically (based on the previous equation) the next state to move into;
        add that move to the ant's tabu list;
        repeat until each ant has completed a solution;
    end;
    for each ant that completed a solution do:
        update attractiveness for each edge that the ant traversed;
    end;
    if (local best solution better than global solution)
        save local best solution as global solution;
    end;
end;
Pseudocode for the Travelling Salesman Problem using ACO:

initialize all edges to a (small) initial pheromone level τ0;
place each ant on a randomly chosen city;
for each iteration do:
    do while each ant has not completed its tour:
        for each ant do:
            move ant to next city by the probability function;
        end;
    end;
    for each ant with a complete tour do:
        evaporate pheromones;
        apply pheromone update;
        if (ant k's tour is shorter than the global solution)
            update global solution to ant k's tour;
        end;
    end;
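The TSP pseudocode above can be fleshed out as a minimal sketch. The parameters alpha, beta, rho, and Q are the ones discussed earlier (attractiveness weight, visibility weight, evaporation rate, base deposit constant); all default values below are illustrative assumptions, not prescribed by the text.

```python
import random

def aco_tsp(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Sketch of ACO for the TSP; dist is a symmetric distance matrix."""
    n = len(dist)
    # pheromone (attractiveness) tau and visibility eta = 1/distance per edge
    tau = [[1.0] * n for _ in range(n)]
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)]
           for i in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, tabu = [start], {start}        # tabu list of visited cities
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tabu]
                # edge chosen with probability proportional to tau^alpha * eta^beta
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cand]
                j = random.choices(cand, weights=weights)[0]
                tour.append(j)
                tabu.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit Q / (tour length) on every traversed edge
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / length
                tau[j][i] += Q / length
    return best_tour, best_len
```

On a tiny symmetric instance (e.g. four cities on a unit square) the colony converges quickly on the perimeter tour, since that path is reinforced fastest.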

Artificial Bee Colony Optimization


The Bees Algorithm is a population-based search algorithm inspired by the natural foraging behavior of
honey bees to find the optimal solution.
The algorithm performs a kind of neighborhood search combined with random search.
Three essential components of forage selection:
Food Sources: The value of a food source depends on many factors such as its proximity to the nest, its
richness or concentration of its energy, and the ease of extracting this energy.
Employed Foragers: They are associated with a particular food source which they are currently
exploiting or are employed at. They carry with them information about this particular source, its
distance and direction from the nest, the profitability of the source and share this information with a
certain probability.
Unemployed Foragers: They are continually on the lookout for a food source to exploit. There are two
types of unemployed foragers: scouts, which search the environment surrounding the nest for new food
sources, and onlookers, which wait in the nest and establish a food source through the information
shared by employed foragers.
The model defines two leading modes of behavior:

- recruitment to a nectar source
- abandonment of a source

The main steps of the algorithm are given below:

Send the scouts onto the initial food sources
REPEAT
    Send the employed bees onto the food sources and determine their nectar amounts
    Calculate the probability values with which the sources are preferred by the onlooker bees
    Stop the exploitation of sources abandoned by the bees
    Send the scouts into the search area to discover new food sources at random
    Memorize the best food source found so far
UNTIL (requirements are met)
Each cycle of the search consists of three steps: moving the employed and onlooker bees onto the food
sources and calculating their nectar amounts; and determining the scout bees and directing them onto
possible food sources. A food source position represents a possible solution to the problem to be
optimized. The amount of nectar of a food source corresponds to the quality of the solution represented
by that food source. Onlookers are placed on the food sources by using a probability based selection
process. As the nectar amount of a food source increases, the probability value with which the food
source is preferred by onlookers increases, too.
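The cycle just described (employed bees, probability-based onlooker selection, and scout replacement of abandoned sources) can be sketched as follows for minimizing a numeric function. The `limit` abandonment counter and all parameter values are illustrative assumptions, not given in the text.

```python
import random

def abc_minimize(f, bounds, n_sources=10, limit=20, n_iter=100):
    """Sketch of the artificial bee colony for minimizing f over a box."""
    dim = len(bounds)
    rand_source = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    sources = [rand_source() for _ in range(n_sources)]
    fits = [f(x) for x in sources]
    trials = [0] * n_sources          # unsuccessful-attempt counters

    def neighbour(i):
        # perturb one dimension toward/away from a random partner source
        k = random.randrange(n_sources - 1)
        k += k >= i                   # index trick: partner k != i
        j = random.randrange(dim)
        x = sources[i][:]
        x[j] += random.uniform(-1, 1) * (x[j] - sources[k][j])
        lo, hi = bounds[j]
        x[j] = min(max(x[j], lo), hi)
        return x

    def try_improve(i):
        x = neighbour(i)
        fx = f(x)
        if fx < fits[i]:              # greedy selection, reset trial counter
            sources[i], fits[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_sources):    # employed bees
            try_improve(i)
        # onlookers prefer sources probabilistically by quality (nectar amount)
        weights = [1.0 / (1.0 + ft) if ft >= 0 else 1.0 - ft for ft in fits]
        for _ in range(n_sources):
            i = random.choices(range(n_sources), weights=weights)[0]
            try_improve(i)
        # scouts abandon exhausted sources and search randomly
        for i in range(n_sources):
            if trials[i] > limit:
                x = rand_source()
                sources[i], fits[i], trials[i] = x, f(x), 0
    b = min(range(n_sources), key=lambda i: fits[i])
    return sources[b], fits[b]
```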

Particle Swarm Optimization (PSO)


This algorithm models the social behavior of bird flocking or fish schooling.
It uses a number of particles that constitute a swarm moving around the search space looking for the
best solution. Each particle adjusts its flight according to its own flying experience as well as the
flying experience of the other particles.
The basic concept of the algorithm is to create a swarm of particles which move in the space around
them (the problem space) searching for their goal, the place which best suits their needs given by a
fitness function. A nature analogy with birds is the following: a bird flock flies in its environment looking
for the best place to rest (the best place can be a combination of characteristics like space for all the
flock, food access, water access or any other relevant characteristic).
Algorithm:
As stated before, PSO simulates the behavior of bird flocking. Suppose the following scenario: a group
of birds is randomly searching for food in an area. There is only one piece of food in the area being
searched. None of the birds knows where the food is, but each knows how far away the food is in each
iteration. So what is the best strategy to find the food? An effective one is to follow the bird
nearest to the food.
PSO learned from this scenario and uses it to solve optimization problems. In PSO, each single
solution is a "bird" in the search space; we call it a "particle". All of the particles have fitness
values, evaluated by the fitness function to be optimized, and velocities which direct the flight of
the particles. The particles fly through the problem space by following the current optimum particles.
PSO is initialized with a group of random particles (solutions) and then searches for optima by updating
generations. In every iteration, each particle is updated by following two "best" values. The first one is
the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called
pbest. Another "best" value that is tracked by the particle swarm optimizer is the best value, obtained so
far by any particle in the population. This best value is a global best and called gbest. When a particle
takes part of the population as its topological neighbors, the best value is a local best and is called lbest.
After finding the two best values, the particle updates its velocity and positions with following equation
(a) and (b).
v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])    (a)
present[] = present[] + v[]    (b)

v[] is the particle velocity and present[] is the current particle (solution). pbest[] and gbest[] are
defined as stated before. rand() is a random number between (0, 1). c1 and c2 are learning factors;
usually c1 = c2 = 2.
The pseudocode of the procedure is as follows:

For each particle
    Initialize particle
END
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            set current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criteria is not attained
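The pseudocode above translates almost line for line into a minimal sketch for function minimization. The velocity clamp (Vmax) is an added safeguard commonly paired with c1 = c2 = 2; it is not part of equations (a) and (b) in the text, and all parameter values are illustrative.

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=100, c1=2.0, c2=2.0):
    """Sketch of PSO using update equations (a) and (b) from the text."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Vmax clamp: an assumption added for stability, not in the text's equations
    vmax = [0.5 * (hi - lo) for lo, hi in bounds]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # equation (a): pull toward personal best and global best
                vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                              + c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax[d], min(vmax[d], vel[i][d]))
                pos[i][d] += vel[i][d]          # equation (b)
            val = f(pos[i])
            if val < pbest_val[i]:              # update pbest, then maybe gbest
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```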

Bat Algorithm
The bat algorithm is based on the echolocation behavior of bats. The echolocation capability of
microbats is fascinating, as these bats can find their prey and discriminate different types of insects
even in complete darkness.
Most microbats are insectivores. Microbats use a type of sonar, called echolocation, to detect prey,
avoid obstacles, and locate their roosting crevices in the dark. These bats emit a very loud sound pulse
and listen for the echo that bounces back from the surrounding objects. Each ultrasonic burst typically
lasts 5 to 20 ms, and microbats emit about 10 to 20 such sound bursts every second. When hunting
for prey, the rate of pulse emission can be sped up to about 200 pulses per second as they fly near
their prey. Such short sound bursts imply a fantastic signal-processing ability in bats.
As the speed of sound in air is typically v = 340 m/s, the wavelength λ of ultrasonic sound bursts with
a constant frequency f is given by

    λ = v / f,    (1)

which is in the range of 2 mm to 14 mm for the typical frequency range from 25 kHz to 150 kHz. Such
wavelengths are of the same order as the sizes of their prey.
Amazingly, the emitted pulse can be as loud as 110 dB and, fortunately, lies in the ultrasonic
region. The loudness varies from loudest when searching for prey to a quieter base when homing in on
prey. The travelling range of such short pulses is typically a few metres, depending on the actual
frequencies.
Such echolocation behavior of microbats can be formulated in such a way that it can be associated with
the objective function to be optimized, which makes it possible to formulate new optimization
algorithms.
If we idealize some of the echolocation characteristics of microbats, we can develop various bat-inspired
algorithms, or bat algorithms. For simplicity, we use the following approximate or idealized rules:
1. All bats use echolocation to sense distance, and they also know the difference between food/prey
and background barriers in some magical way.
2. Bats fly randomly with velocity vi at position xi with a fixed frequency fmin, varying wavelength λ,
and loudness A0 to search for prey. They can automatically adjust the wavelength (or frequency) of
their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of
their target.
3. Although the loudness can vary in many ways, we assume that the loudness varies from a large
(positive) A0 to a minimum constant value Amin.
Pseudocode of the bat algorithm (BA):

Objective function f(x), x = (x1, ..., xd)^T
Initialize the bat population xi (i = 1, 2, ..., n) and velocities vi
Define pulse frequency fi at xi
Initialize pulse rates ri and loudness Ai
while (t < max number of iterations)
    Generate new solutions by adjusting frequency and updating velocities and locations/solutions
    [equations (2) to (4)]
    if (rand > ri)
        Select a solution among the best solutions
        Generate a local solution around the selected best solution
    end if
    Generate a new solution by flying randomly
    if (rand < Ai & f(xi) < f(x*))
        Accept the new solution
        Increase ri and reduce Ai
    end if
    Rank the bats and find the current best x*
end while
Postprocess results and visualization
For simplicity, we can assume f ∈ [0, fmax]. We know that higher frequencies have shorter wavelengths
and travel a shorter distance. For bats, the typical range is a few metres. The pulse rate can simply
be in the range [0, 1], where 0 means no pulses at all and 1 means the maximum rate of pulse
emission.
In simulations, we naturally use virtual bats. We have to define the rules for how their positions xi
and velocities vi in a d-dimensional search space are updated. The new solutions xi^t and velocities
vi^t at time step t are given by

    fi = fmin + (fmax − fmin) β,    (2)
    vi^t = vi^(t−1) + (xi^t − x*) fi,    (3)
    xi^t = xi^(t−1) + vi^t,    (4)

where β ∈ [0, 1] is a random vector drawn from a uniform distribution. Here x* is the current global
best location (solution), which is located after comparing all the solutions among all n bats. As the
product λi fi is the velocity increment, we can use either fi (or λi) to adjust the velocity change
while fixing the other factor λi (or fi), depending on the type of the problem of interest. In our
implementation, we will use fmin = 0 and fmax = 100, depending on the domain size of the problem of
interest. Initially, each bat is randomly assigned a frequency drawn uniformly from [fmin, fmax].
For the local search part, once a solution is selected among the current best solutions, a new solution
for each bat is generated locally using a random walk:

    xnew = xold + ε A^t,    (5)

where ε ∈ [−1, 1] is a random number and A^t = <Ai^t> is the average loudness of all the bats at this
time step. The update of the velocities and positions of bats has some similarity to the procedure in
standard particle swarm optimization, as fi essentially controls the pace and range of the movement of
the swarming particles. To a degree, BA can be considered a balanced combination of standard particle
swarm optimization and an intensive local search controlled by the loudness and pulse rate.

Furthermore, the loudness Ai and the rate ri of pulse emission have to be updated accordingly as the
iterations proceed. Since the loudness usually decreases once a bat has found its prey, while the rate
of pulse emission increases, the loudness can be chosen as any value of convenience. For example, we
can use A0 = 100 and Amin = 1. For simplicity, we can also use A0 = 1 and Amin = 0, where Amin = 0
means that a bat has just found the prey and temporarily stops emitting any sound. Now we have

    Ai^(t+1) = α Ai^t,    ri^(t+1) = ri^0 [1 − exp(−γt)],    (6)

where α and γ are constants. In fact, α is similar to the cooling factor of a cooling schedule in
simulated annealing [9]. For any 0 < α < 1 and γ > 0, we have

    Ai^t → 0,    ri^t → ri^0, as t → ∞.    (7)

In the simplest case, we can use α = γ, and we have used α = γ = 0.9 in our simulations. The choice of
parameters requires some experimenting. Initially, each bat should have different values of loudness
and pulse emission rate, and this can be achieved by randomization; for example, the initial loudness
Ai^0 ∈ [0, 1] when using (6). The loudness and emission rate of a bat are updated only if its new
solution is improved, which means the bat is moving towards the optimal solution.
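Equations (2)–(4), the local random walk (5), and the loudness/pulse-rate updates (6) can be sketched as follows for minimization. For a small test domain this sketch assumes fmax = 2 (rather than the fmax = 100 mentioned for larger domains), and all other parameter values are illustrative.

```python
import math
import random

def bat_minimize(f, bounds, n_bats=20, n_iter=200,
                 fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9):
    """Sketch of the bat algorithm (BA) for minimizing f over a box."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    fit = [f(p) for p in x]
    A = [random.uniform(1.0, 2.0) for _ in range(n_bats)]   # loudness Ai
    r0 = [random.random() for _ in range(n_bats)]           # initial pulse rates
    r = r0[:]
    b = min(range(n_bats), key=lambda i: fit[i])
    best, best_val = x[b][:], fit[b]                        # global best x*
    for t in range(1, n_iter + 1):
        A_mean = sum(A) / n_bats
        for i in range(n_bats):
            fi = fmin + (fmax - fmin) * random.random()     # eq. (2)
            cand = x[i][:]
            for d in range(dim):
                v[i][d] += (x[i][d] - best[d]) * fi         # eq. (3)
                cand[d] = x[i][d] + v[i][d]                 # eq. (4)
            if random.random() > r[i]:
                # local random walk around the current best, eq. (5)
                cand = [best[d] + random.uniform(-1.0, 1.0) * A_mean
                        for d in range(dim)]
            cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
            fc = f(cand)
            if random.random() < A[i] and fc < fit[i]:
                x[i], fit[i] = cand, fc
                A[i] *= alpha                               # eq. (6): reduce Ai
                r[i] = r0[i] * (1.0 - math.exp(-gamma * t)) # eq. (6): raise ri
            if fc < best_val:
                best, best_val = cand[:], fc
    return best, best_val
```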

Glowworm Swarm Optimization

Glowworm swarm optimization (GSO) is a swarm intelligence algorithm developed from the behavior of
glowworms (also known as fireflies or lightning bugs). The behavioral pattern used by the algorithm is
the glowworms' apparent capability to change the intensity of their luciferin emission and thus appear
to glow at different intensities.
In GSO, the glowworms are distributed in the definition space of the objective function. Each glowworm
carries its own luciferin level and has its own field of vision, called the local-decision range. A
glowworm's brightness depends on the objective function value at its position: the brighter the glow,
the better the position, i.e. the better the objective value. Each glowworm seeks the set of neighbors
within its local-decision range; within that set, a brighter glowworm exerts a higher attraction,
pulling the glowworm toward it, and the direction of flight changes with the neighbor chosen at each
step. Moreover, the size of the local-decision range is influenced by the number of neighbors: when
neighbor density is low, a glowworm enlarges its decision radius so as to find more neighbors;
otherwise, the radius is reduced. Finally, most of the glowworms gather at the multiple optima of the
given objective function.
Each glowworm i encodes the objective function value J(xi(t)) at its current location xi(t) into a
luciferin value li and broadcasts it within its neighborhood. The set of neighbors Ni(t) of glowworm i
consists of those glowworms that have a relatively higher luciferin value and that are located within a
dynamic decision domain, updated by formula (1) at each iteration.
Local-decision range update:

    rdi(t+1) = min{ rs, max{ 0, rdi(t) + β (nt − |Ni(t)|) } },    (1)

where rdi(t+1) is glowworm i's local-decision range at iteration t+1, rs is the sensor range, nt is the
neighborhood threshold, and the parameter β affects the rate of change of the neighborhood range.
The set of neighbors within the local-decision range:

    Ni(t) = { j : ||xj(t) − xi(t)|| < rdi(t), li(t) < lj(t) },    (2)

where xj(t) is glowworm j's position at iteration t and lj(t) is glowworm j's luciferin at iteration t.
The set of neighbors of glowworm i consists of those glowworms that have a relatively higher luciferin
value and that are located within a dynamic decision domain whose range rdi is bounded above by a
circular sensor range rs (0 < rdi < rs). Each glowworm i selects a neighbor j with probability pij(t)
and moves toward it. These movements, based only on local information, enable the glowworms to
partition into disjoint subgroups, exhibit a simultaneous taxis behavior toward, and eventually
co-locate at, the multiple optima of the given objective function.

Probability distribution used to select a neighbor:

    pij(t) = (lj(t) − li(t)) / Σ_{k ∈ Ni(t)} (lk(t) − li(t)),    (3)

Movement update (s is the step size):

    xi(t+1) = xi(t) + s (xj(t) − xi(t)) / ||xj(t) − xi(t)||,    (4)

Luciferin update:

    li(t) = (1 − ρ) li(t−1) + γ J(xi(t)),    (5)

where li(t) is the luciferin value of glowworm i at iteration t, ρ ∈ (0, 1) controls how strongly the
cumulative goodness of the path followed by the glowworm is reflected in its current luciferin value,
the parameter γ scales the objective function values, and J(xi(t)) is the value of the objective
function.
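The update cycle above (luciferin update, neighbor selection, movement, decision-range update) can be sketched as follows for maximizing an objective. The parameter values (rho, gamma, beta, nt, s, rs, and the initial luciferin of 5.0) follow common defaults from the GSO literature and are assumptions, not given in the text.

```python
import math
import random

def gso_maximize(J, bounds, n=30, n_iter=100,
                 rho=0.4, gamma=0.6, beta=0.08, nt=5, s=0.03, rs=3.0):
    """Sketch of glowworm swarm optimization; returns final glowworm positions."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    l = [5.0] * n          # luciferin levels (common initial value, assumed)
    rd = [rs] * n          # local-decision ranges, bounded above by rs
    for _ in range(n_iter):
        # luciferin update: l_i = (1 - rho) l_i + gamma J(x_i)
        l = [(1 - rho) * li + gamma * J(xi) for li, xi in zip(l, x)]
        new_x = [xi[:] for xi in x]
        for i in range(n):
            dists = [math.dist(x[i], x[j]) for j in range(n)]
            # neighbors: brighter glowworms inside the local-decision range
            nbrs = [j for j in range(n)
                    if j != i and dists[j] < rd[i] and l[j] > l[i]]
            if nbrs:
                # choose neighbor j with probability proportional to l_j - l_i
                w = [l[j] - l[i] for j in nbrs]
                j = random.choices(nbrs, weights=w)[0]
                d = dists[j]
                if d > 0:
                    # step of size s toward the chosen brighter neighbor
                    new_x[i] = [xi + s * (xj - xi) / d
                                for xi, xj in zip(x[i], x[j])]
            # grow the range when neighbors are scarce, shrink when crowded
            rd[i] = min(rs, max(0.0, rd[i] + beta * (nt - len(nbrs))))
        x = new_x
    return x
```

Because each glowworm moves only toward brighter neighbors, disjoint subgroups form and settle near the (possibly multiple) maxima of J.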
