
SWARM INTELLIGENCE 2010-2011

Chapter 1

Introduction
Swarm Intelligence (SI) is a distributed intelligent paradigm for solving
optimization problems that originally took its inspiration from biological examples
of swarming, flocking, and herding phenomena in vertebrates. SI systems are typically
made up of a population of simple agents interacting locally with one another and
with their environment. The agents follow very simple rules, and although there is no
centralized control structure dictating how individual agents should behave, local, and
to a certain degree random, interactions between such agents lead to the emergence of
"intelligent" global behaviour, unknown to the individual agents. Natural examples of
SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish
schooling.

Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in
flocks of birds, schools of fish, or swarms of bees, and even human social behaviour,
from which the idea emerged. PSO is a population-based optimization tool which can
be implemented and applied easily to solve various function optimization problems,
or problems that can be transformed into function optimization problems. As an
algorithm, the main strength of PSO is its fast convergence, which compares
favourably with many global optimization algorithms such as Genetic Algorithms (GA).
For applying PSO successfully, one of the key issues is how to map the problem
solution onto the PSO particle, which directly affects its feasibility and performance.

In the past several years, PSO has been successfully applied in many research and
application areas. It has been demonstrated that PSO gets better results in a faster,
cheaper way compared with other methods. Another reason that PSO is attractive is that
there are few parameters to adjust. One version, with slight variations, works well in
a wide variety of applications. Particle swarm optimization has been used both for
approaches applicable across a wide range of applications and for applications
focused on a specific requirement.

Electronics & Communication, SIT, Tumkur-3 Page 1



1.1 Literature Survey


C. Reynolds, “Flocks, herds, and schools: A distributed behavioural model,” Comp.
Graph., vol. 21, no. 4, pp. 25–34, 1987, observes that people long ago discovered the
variety of interesting insect and animal behaviours in nature: a flock of birds
sweeps across the sky, a group of ants forages for food, a school of fish swims and
turns together, and so on. We call this kind of aggregate motion “swarm behaviour.”
Recently, biologists and computer scientists in the field of “artificial life” have studied
how to model biological swarms to understand how such “social animals” interact,
achieve goals, and evolve. Moreover, engineers are increasingly interested in this kind
of swarm behaviour, since the resulting “swarm intelligence” can be applied in
optimization (e.g. in telecommunication systems and unmanned vehicle systems), robotics,
traffic patterns in transportation systems, and military applications. A high-level view
of a swarm suggests that the N agents in the swarm cooperate to achieve some
purposeful behaviour and achieve some goal. This apparent “collective intelligence”
seems to emerge from what are often large groups of relatively simple agents. The
agents use simple local rules to govern their actions, and via the interactions of the
entire group, the swarm achieves its objectives. A type of “self-organization” emerges
from the collection of actions of the group.

Yang Liu and Kevin M. Passino propose that swarm intelligence is the emergent
collective intelligence of groups of simple autonomous agents. Here, an autonomous
agent is a subsystem that interacts with its environment, which probably consists of
other agents, but acts relatively independently from all other agents. The autonomous
agent does not follow commands from a leader, or some global plan. For example,
for a bird to participate in a flock, it only adjusts its movements to coordinate with the
movements of its flock mates, typically its “neighbours” that are close to it in the
flock. A bird in a flock simply tries to stay close to its neighbours, but avoid collisions
with them. Each bird does not take commands from any leader bird, since there is no
lead bird. Any bird can fly in the front, centre, or back of the swarm. Swarm behaviour
helps birds take advantage of several things, including protection from predators
(especially for birds in the middle of the flock) and searching for food (essentially
each bird is exploiting the eyes of every other bird).


Although many studies on swarm intelligence have been presented, there are no
general criteria to evaluate a swarm intelligent system's performance. Fukuda et al.
attempt an evaluation based on flexibility, which is essentially a robustness property.
Fukuda proposed measures of fault tolerance and local superiority as indices, and
compared two swarm intelligent systems via simulation with respect to these two
indices. There is a significant need for more analytical studies.

Yang Liu and Kevin M. Passino also identify several basic principles for swarm
intelligence, such as proximity, quality, response diversity, adaptability, and
stability. Stability is a basic property of swarms, since if it is not present, it is
typically impossible for the swarm to achieve any other objective. Stability
characterizes the cohesiveness of the swarm as it moves. How can we define
mathematically whether swarms are stable? The relative velocity and distance of
adjacent members in a group can be applied as criteria. Also, no matter whether it is a
biological or mechanical swarm, there must exist some attractant and repellent
profiles in the environment so that the group can move so as to seek attractants and
avoid repellents. We can analyze the stability of a swarm by observing whether it
stays cohesive and converges to equilibrium points of a combined attractant/repellent
profile.

Jin et al. proposed a stability analysis of synchronized distributed control of 1-D and
2-D swarm structures. They proved that synchronized swarm structures are stable in the
sense of Lyapunov, with appropriate weights in the sum of adjacent errors, if the
vertical disturbances vary sufficiently more slowly than the response time of the servo
systems of the agents. Convergence under totally asynchronous distributed control
is still an open problem. Convergence of simple asynchronous distributed control can
be proven in a way similar to the convergence of a discrete Hopfield neural network.
Beni proposed a sufficient condition for the asynchronous convergence of a linear
swarm to a synchronously achievable configuration, since a large class of
self-organizing tasks in distributed robotic systems can be mapped into reconfigurations
of patterns in swarms. The model and stability analysis are, however, quite similar to
the model and proof of stability for the load balancing problem in computer networks.


Chapter 2

Swarm Intelligence
Swarm intelligence is the discipline that deals with natural and artificial systems
composed of many individuals that coordinate using decentralized control and self-
organization. In particular, the discipline focuses on the collective behaviours that
result from the local interactions of the individuals with each other and with their
environment. Examples of systems studied by swarm intelligence are colonies of ants
and termites, schools of fish, flocks of birds, herds of land animals. Some human
artefacts also fall into the domain of swarm intelligence, notably some multi-robot
systems, and also certain computer programs that are written to tackle optimization
and data analysis problems.

A typical swarm intelligence system is composed of many individuals. The
individuals are relatively homogeneous (i.e., either all identical or belonging to a few
typologies). The interactions among the individuals are based on simple behavioural
rules that exploit only local information that the individuals exchange directly or via
the environment. The overall behaviour of the system results from the interactions of
individuals with each other and with their environment; that is, the group behaviour
self-organizes. The characterizing property of a swarm intelligence system is its
ability to act in a coordinated way without the presence of a coordinator or of an
external controller.

2.1 Particle Swarm Optimization

Particle swarm optimization (PSO) belongs to the class of swarm intelligence
techniques that are used to solve optimization problems. PSO is a population-based
stochastic optimization technique for the solution of continuous optimization
problems. It is inspired by social behaviours in flocks of birds and schools of fish.
In particle swarm optimization, simple software agents, called particles, move in
the search space of an optimization
problem. The position of a particle represents a candidate solution to the
optimization problem at hand. Each particle searches for better positions in the
search space by changing its velocity according to rules originally inspired by
behavioural models of bird flocking.

In practice, in the initialization phase each particle is given a random initial position
and an initial velocity. The position of the particle represents a solution of
the problem and therefore has a value, given by the objective function. While
moving in the search space, particles memorize the position of the best
solution found. At each iteration of the algorithm, each particle moves with a
velocity that is a weighted sum of three components: the old velocity, a
velocity component that drives the particle towards the location in the search
space where it previously found its best solution so far, and a velocity
component that drives the particle towards the location in the search space
where the neighbouring particles found the best solution so far.

2.1.1 Canonical Model

The canonical PSO model consists of a swarm of particles, which are initialized with
a population of random candidate solutions. A basic variant of the PSO algorithm
works by having a population (called a swarm) of candidate solutions (called
particles). These particles are moved around in the search space according to a
well-defined procedure. The movements of the particles are guided by their own best
known position in the search space as well as the entire swarm's best known position.
When improved positions are discovered, these then come to guide the movements of
the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed,
that a satisfactory solution will eventually be discovered.

Formally, let f: ℝn → ℝ be the fitness or cost function which must be minimized. The
function takes a candidate solution as argument in the form of a vector of real
numbers and produces a real number as output which indicates the fitness of the given
candidate solution. The gradient of f is not known. The goal is to find a solution a for
which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global
minimum. Let S be the number of particles in the swarm, each having a position xi ∈
ℝn in the search-space and a velocity vi ∈ ℝn. Let pi be the best known position of
particle i and let g be the best known position of the entire swarm. A basic PSO
algorithm is then:

• For each particle i = 1, ..., S do:
   o Initialize the particle's position with a uniformly distributed random
     vector: xi ~ U(blo, bup), where blo and bup are the lower and upper
     boundaries of the search-space.
   o Initialize the particle's best known position to its initial position:
     pi ← xi.
   o If f(pi) < f(g), update the swarm's best known position: g ← pi.
   o Initialize the particle's velocity: vi ~ U(−|bup − blo|, |bup − blo|).
• Until a termination criterion is met (e.g. number of iterations performed, or
  adequate fitness reached), repeat:
   o For each particle i = 1, ..., S do:
      - Pick random numbers: rp, rg ~ U(0, 1).
      - Update the particle's velocity:

        vi = ω vi + φp rp (pi − xi) + φg rg (g − xi) (2.1)

      - Update the particle's position:

        xi = xi + vi (2.2)

      - If f(xi) < f(pi), then:
         - Update the particle's best known position: pi ← xi.
         - If f(pi) < f(g), update the swarm's best known position:
           g ← pi.
• Now g holds the best found solution.

The parameters ω, φp, and φg are selected by the practitioner and control
the behaviour and efficacy of the PSO method.
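The steps above can be sketched in Python. This is a minimal illustration, not code from any cited work; the default parameter values (ω = 0.72, φp = φg = 1.49) are common choices from the PSO literature rather than requirements of the algorithm.

```python
import random

def pso(f, dim, n_particles, n_iters, b_lo, b_up,
        w=0.72, phi_p=1.49, phi_g=1.49):
    """Minimal gbest PSO following the steps above.
    f: objective to minimize; b_lo, b_up: search-space bounds."""
    span = b_up - b_lo
    # Positions uniform in [b_lo, b_up]; velocities uniform in
    # [-|b_up - b_lo|, |b_up - b_lo|], as in the initialization step.
    x = [[random.uniform(b_lo, b_up) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[random.uniform(-span, span) for _ in range(dim)]
         for _ in range(n_particles)]
    p = [xi[:] for xi in x]          # personal best positions
    g = min(p, key=f)[:]             # swarm's best known position

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                rp, rg = random.random(), random.random()
                # Velocity update, Eq. (2.1)
                v[i][d] = (w * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                # Position update, Eq. (2.2)
                x[i][d] += v[i][d]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Usage: minimize the sphere function, whose global minimum is the origin.
best = pso(lambda x: sum(xi * xi for xi in x),
           dim=2, n_particles=30, n_iters=200, b_lo=-10.0, b_up=10.0)
```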

2.1.2 The Parameters of PSO

The choice of PSO parameters can have a large impact on optimization performance,
and selecting parameters that yield good performance has therefore been the subject
of much research. The role of the inertia weight ω in Eq. (2.1) is considered critical for
the convergence behaviour of PSO. The inertia weight is employed to control the
impact of the previous history of velocities on the current one. Accordingly, the
parameter ω regulates the trade-off between the global (wide-ranging) and local
(nearby) exploration abilities of the swarm. A large inertia weight facilitates global
exploration (searching new areas), while a small one tends to facilitate local
exploration, i.e. fine-tuning the current search area. A suitable value for the inertia
weight ω usually provides a balance between global and local exploration abilities and
consequently reduces the number of iterations required to locate the optimum
solution. Initially, the inertia weight was set as a constant. However, some
experimental results indicate that it is better to initially set the inertia to a large value,
in order to promote global exploration of the search space, and gradually decrease it to
obtain more refined solutions. Thus, an initial value around 1.2, gradually reduced
towards 0, can be considered a good choice for ω. A better method is to use an
adaptive approach (for example, a fuzzy controller), in which the parameter is
adaptively fine-tuned according to the problem under consideration. The parameters
φp and φg in Eq. (2.1) are not critical for the convergence of PSO. However, proper
fine-tuning may result in faster convergence and alleviation of local minima. As
default values, φp = φg = 2 are usually used, but some experimental results indicate that
φp = φg = 1.49 might provide even better results. Recent work reports that it might be
even better to choose a larger cognitive parameter φp than the social parameter φg, but
with φp + φg ≤ 4.
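A linearly decreasing inertia weight of the kind described above could be sketched as follows; the schedule from 1.2 towards 0 follows the values suggested in the text, and the function name is illustrative.

```python
def inertia(t, n_iters, w_start=1.2, w_end=0.0):
    """Linearly decay the inertia weight from w_start at iteration 0
    to w_end at the final iteration: large w early favours global
    exploration, small w late favours local fine-tuning."""
    return w_start - (w_start - w_end) * t / (n_iters - 1)
```

Inside the main PSO loop, `w` in the velocity update of Eq. (2.1) would simply be replaced by `inertia(t, n_iters)` for the current iteration `t`.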

The particle swarm algorithm can be described generally as a population of vectors
whose trajectories oscillate around a region defined by each individual's previous
best success and the success of some other particle. Various methods have been used
to identify the other particle that influences the individual. Eberhart and Kennedy
called the two basic methods the “gbest model” and the “lbest model”. In the lbest
model, particles have information only of their own and their nearest array
neighbours' best (lbest), rather than that of the entire group; namely, in Eq. (2.3) the
global best is replaced by the local best pld. A new neighbourhood relation is thus
defined for the swarm:

vid(t+1) = ω vid(t) + φp rp (pid(t) − xid(t)) + φg rg (pld(t) − xid(t)) (2.3)

xid(t+1) = xid(t) + vid(t+1) (2.4)


In the gbest model, the trajectory of each particle's search is influenced by the best
point found by any member of the entire population. The best particle acts as an
attractor, pulling all the particles towards it; eventually all particles will converge to
this position. The lbest model allows each individual to be influenced by some smaller
number of adjacent members of the population array. The particles selected to be in
one subset of the swarm have no direct relationship to the particles in other
neighbourhoods. Typically, lbest neighbourhoods comprise exactly two neighbours.
When the number of neighbours increases to all but the particle itself, the lbest model
becomes equivalent to the gbest model. Some experimental results show that the gbest
model converges quickly on problem solutions but has a weakness for becoming
trapped in local optima, while the lbest model converges slowly but is able to
“flow around” local optima, as the individuals explore different regions. The gbest
model is strongly recommended for unimodal objective functions, while a variable
neighbourhood model is recommended for multimodal objective functions.
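As one hypothetical sketch, the local best of Eq. (2.3) can be computed over a ring (circle) neighbourhood; with k = 1 each particle sees exactly two neighbours (the classic lbest model), and enlarging k until the neighbourhood covers the whole swarm recovers the gbest model. The function name and signature are assumptions for illustration.

```python
def lbest(p, f, i, k=1):
    """Best known position among particle i's ring neighbours
    (k on each side of it in the storing array) and itself.
    p: list of personal-best positions; f: fitness to minimize."""
    n = len(p)
    # Indices wrap around the array, forming a circle topology.
    neighbourhood = [p[(i + j) % n] for j in range(-k, k + 1)]
    return min(neighbourhood, key=f)
```

In the velocity update, the term `g` of the gbest model is simply replaced by `lbest(p, f, i)` for each particle.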

Various population topologies have been studied for their effect on PSO performance,
and different concepts for neighbourhoods can be envisaged. A neighbourhood can be
spatial, when it is determined by the Euclidean distance between the positions of two
particles, or sociometric (e.g. based on the index position in the storing array). The
different concepts of neighbourhood lead to different neighbourhood topologies.
Different neighbourhood topologies primarily affect the communication abilities and
thus the group's performance. Several topologies are illustrated in Fig. 2.1. In the case
of a global neighbourhood, the structure is a fully connected network where every
particle has access to the others' best positions, as shown in Fig. 2.1(a). In local
neighbourhoods there are more possible variants. In the von Neumann topology,
shown in Fig. 2.1(b), neighbours above, below, and on each side on a two-dimensional
lattice are connected; Fig. 2.1(e) illustrates the von Neumann topology with one
section flattened out. In a pyramid topology, three-dimensional wire-frame triangles
are formed, as illustrated in Fig. 2.1(c). As shown in Fig. 2.1(d), one common
structure for a local neighbourhood is the circle topology, where individuals far away
from each other (in terms of graph structure, not necessarily distance) are independent
of each other but neighbours are closely connected. Another structure, called the
wheel (star) topology, has a more hierarchical structure, because all members of the
neighbourhood are connected to a 'leader' individual, as shown in Fig. 2.1(f). All
information has to be communicated through this 'leader', which then compares the
performances of all others.

Fig. 2.1. Neighbourhood topologies.


Chapter 3

Unmanned Vehicle Navigation Using Swarm Intelligence

Unmanned vehicles are used to explore physical areas where humans are unable to go
due to various constraints, and various algorithms have been used to perform this
task. Here, a set of randomized unmanned vehicles is first deployed to locate a
single target. Then randomized unmanned vehicles are deployed to locate several
targets and converge at one target of particular interest. Each of the targets
transmits some information which draws the attention of the randomized unmanned
vehicles to the target of interest. Particle Swarm Optimization (PSO) has been applied
to solve this problem. Results have shown that the PSO algorithm converges the
unmanned vehicles to the target of particular interest successfully and quickly.

Autonomous unmanned vehicles have generated much interest in recent years due to
their ability to perform relatively difficult tasks in hazardous or remote environments.
Different stochastic iterative search methods have been investigated for the
optimization of continuous non-linear functions. Various algorithms, such as
evolutionary computation, genetic algorithms, and adaptive cultural models, have been
used to perform this task. The task has two parts. In the first part, a set of randomized
unmanned vehicles is deployed to locate a single target. In the second part, the
randomized unmanned vehicles are deployed to locate several targets and are then
converged at one target of particular interest. Each of the targets transmits some
information which draws the attention of the randomized unmanned vehicles to the
target of interest. A study has been done on the effect of the number of particles in the
swarm and the number of iterations required for converging them at the target(s). The
effects of changing some of the PSO parameters on the results have also been studied.


The concept is that each particle randomly searches through the problem space by
updating itself with its own memory and the social information gathered from other
particles. Within a defined problem space, the system has a population of particles.
Each particle is randomized with a velocity and 'flown' in the problem space. The
particles have a memory and keep track of their previous best position (Pbest) with
respect to the target; thus each Pbest is related to a particular particle. The best
value among all these Pbests is defined as the global best position (Gbest) with respect
to the target. Therefore each particle has its own Pbest and the whole swarm has one
Gbest; these factors are called the quality factors. The velocities and positions of the
particles are constantly updated until all have converged at the target. The basic PSO
velocity and position update equations are given by (3.1) and (3.2).

Vnew = wi × Vold + c1 × rand() × (Pbest − Pold) + c2 × rand() × (Gbest − Pold) (3.1)

Pnew = Pold + Vnew (3.2)

where
Vnew – new velocity calculated for each particle
Vold – velocity of the particle from the previous iteration
Pnew – new position calculated for each particle
Pold – position of the particle from the previous iteration
Pbest – the particle's best position
Gbest – the best position attained by any particle in the whole population/swarm
wi – inertia weight constant
c1 and c2 – weights for the terms dependent on the particle's best and global best positions

The population responds to the factors Pbest and Gbest in order to find a better
position. The particles are drawn towards the position of their own previous best
performance and the best performance of any particle in the group. The procedure for
the implementation of PSO involves the following basic steps:
(i) Define the problem space with its boundaries.
(ii) Initialize an array of particles with random positions and velocities. These random
positions are initially assigned to be the Pbest of the particles. Also initialize the
target(s) position(s).
(iii) Evaluate the desired fitness function of the particles in step (ii); in this case, the
Euclidean distance from the target. Select the Gbest from the Pbest of the particles.
(iv) Compute the particles' new velocities and positions using (3.1) and (3.2)
respectively.
(v) Check whether the particles are within the problem space. If a particle is not within
the problem space, then its velocity is set to the pre-defined maximum velocity and
the particle's new position is set to its previous best position.
(vi) Calculate the new fitness function for all the particles' new positions. Determine
the particles' new Pbest, compare with the particles' previous Pbest, and update the
value with the new one if necessary.
(vii) Calculate the new global best position Gbest among all the particles' new Pbest.
Compare with the previous best and update the global best before the next iteration.
(viii) Steps (iv) to (vii) are repeated until all the particles have attained their desired
fitness.
The differences between the particles' positions with respect to the global best (Gbest)
and the respective particle's best (Pbest) are weighted by the constants c1 and c2 and a
random number between 0 and 1.
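Step (iii) might be sketched as follows, assuming the fitness is the Euclidean distance to the target as stated above; the helper names are illustrative.

```python
import math

def fitness(position, target):
    """Step (iii): Euclidean distance from a particle to the target
    (smaller is better)."""
    return math.dist(position, target)

def select_gbest(pbests, target):
    """Gbest is the personal best position closest to the target."""
    return min(pbests, key=lambda p: fitness(p, target))
```

For example, among the personal bests (3, 4), (1, 1), and (6, 8), the Gbest with respect to the target (0, 0) is (1, 1), the point with the smallest distance.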

3.1 Target search


This model investigates two types of search problems: the first involves a single
target location, and the second involves two targets of which only one is of interest.
Figures 3.1 and 3.2 show the graphical representation of the two cases studied. In
Figure 3.2, the darker target is the one to be located by the particles. The particles are
all shown at random locations.

Figure 3.1. Randomized particles and a target in the search space.


Figure 3.2. Randomized particles and two targets in the two neighbourhoods in the
search space. The darker target is the only one of interest in the search.

A MATLAB program was developed for implementing the PSO algorithm. The
inertial weight wi was taken to be 0.8 and 0.6, given a dynamic range for wi of 0.2
to 1.2. The acceleration constants c1 and c2 were taken to be 2, but the study was also
carried out for different values of these constants. The other parameters that need to
be defined are the domain within which the search is to be carried out and the
maximum allowable velocity for the particles. For the simulation, the search space
was taken as 10 units and the maximum velocity was limited to 2 units. The initial
position and velocity of the particles were randomly generated. The successive new
velocities and positions were calculated using the equations given in (3.1) and (3.2)
respectively. Initially the particles' best position Pbest is the same as the initial random
position. The initial global best Gbest is calculated from the initial Pbest: the
Euclidean distance of each particle from the target is computed, and this array is
searched for the minimum value; the coordinates corresponding to this minimum value
are the global best. Within a loop the algorithm calculates the new velocity depending
on the parameters passed to it from the previous iteration. The new position of each
particle depends on the particle's current velocity. After updating the position of every
particle, the particles' best positions and the global best position are recalculated. The
various parameters that can be manipulated for different results are the number of
particles, the problem space, the weights, and the number of iterations. By increasing
the problem space, the relative number of iterations required to reach the target
reduced.
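The boundary check described in step (v) of the procedure could look like the following sketch, in which an out-of-bounds particle has its velocity clamped to the predefined maximum and its position reset to its previous best; the function name and signature are assumptions for illustration.

```python
def enforce_bounds(pos, vel, pbest, lo, hi, v_max):
    """Step (v): if a particle has left the problem space [lo, hi],
    clamp each velocity component to [-v_max, v_max] and reset the
    particle's position to its previous best position."""
    if any(c < lo or c > hi for c in pos):
        vel = [max(-v_max, min(v_max, v)) for v in vel]
        pos = list(pbest)
    return pos, vel
```

With the simulation values above (a 10-unit space, maximum velocity 2 units), a particle at (11, 0) with velocity (5, -3) would be reset to its Pbest with velocity clamped to (2, -2).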


In the second part of the simulation, two targets have been placed in the space. Each
target is assigned a parameter I, which describes its intensity. The particles deployed
here need to be divided into neighbourhoods. For simplicity, the particles have been
randomized separately within the local neighbourhoods. The procedure for the
implementation of PSO involves the following basic steps:
(i) Define the problem space and its boundaries. Also define the intensities of the
targets.
(ii) Divide the space into local neighbourhoods depending on the number of targets
(in this case 2).
(iii) Randomize the particles' positions in the individual neighbourhoods, along with
their velocities.
(iv) Perform steps (iii) to (viii) of the general PSO described in the previous section
for each target individually. In this case the global best (Gbest) is replaced by the
local best (Lbest).
(v) After the respective particles have converged at their targets, the intensities of the
targets are read using sensors located on one or more particles, and the target of
particular interest is identified.
(vi) Now all the particles need to converge at the desired target, and steps (iii) to
(viii) of the general PSO case are repeated.

The logic employed in this part is essentially the same as in the single-target case,
only executed twice. First the targets are isolated within a neighbourhood; therefore,
within this domain, the problem essentially becomes a single-target case with half the
number of particles, and the same holds for the other target. After the particles
converge at each of the targets, the intensity of each target is read using sensors that
would be mounted on the particles in a practical situation. The desired target is the
one with the greater sensor outcome/intensity. The particles at the other target then
need to move towards the desired target.
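The two-phase logic can be sketched with two illustrative helpers, one dividing the swarm among the targets' neighbourhoods (step (ii)) and one picking the target with the greatest sensed intensity after the sub-swarms converge; both names are hypothetical.

```python
def split_into_neighbourhoods(particles, n_targets):
    """Phase one, step (ii): divide the swarm evenly among the
    targets' local neighbourhoods."""
    k = len(particles) // n_targets
    return [particles[i * k:(i + 1) * k] for i in range(n_targets)]

def desired_target(targets, intensities):
    """Phase two: after each sub-swarm has converged on its own
    target, pick the target with the greatest sensed intensity as
    the one all particles should then move towards."""
    return max(zip(targets, intensities), key=lambda t: t[1])[0]
```

With ten particles and two targets, each neighbourhood receives five particles; if the intensities read at targets (0, 0) and (5, 5) are 0.3 and 0.9, the whole swarm then converges on (5, 5).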


3.2 Analysis

Figure 3.3: Plot of the number of particles vs. the number of iterations with wi = 0.6.

Figure 3.4: Plot of the number of particles vs. the number of iterations with wi = 0.8.

The MATLAB code was executed while varying various parameters, including the
inertial weight, c1 and c2, the number of particles, and the number of iterations. The
graphs presented show the results when the code was executed for two different
values of the inertial weight. The program was also executed a number of times with
different numbers of particles. The graph in Figure 3.3 shows that initially increasing
the number of particles in the swarm reduces the number of iterations required to
reach the target. But after increasing the number of particles beyond a certain value,
the number of iterations required started increasing; this follows the law of
diminishing returns. The same is true for the case where wi = 0.8, but it can be seen
that the lowest point in the graph for wi = 0.8 is around 30 to 35 particles in a swarm,
while for wi = 0.6 it is 20 to 25. From the results it can also be seen that by reducing
the value of wi, the number of iterations required to achieve the goal reduces by a
large number. It can also be seen that the two graphs look different: when the value of
wi is 0.8, the number of iterations required to reach the goal does not follow the same
path as that for wi = 0.6. By increasing the value of wi, it was seen that the number of
iterations required increased. But when wi was taken greater than 1, the particles
reached the extreme ends of the problem space and the velocities saturated at the
maximum velocity defined. This happens because for a higher wi the velocity is
higher; therefore, the distance covered by the particle between iterations is greater.
This results in the particle overshooting the target location, and hence more iterations
are needed to pull it back towards the target. It was also seen that by increasing the
number of iterations for a fixed number of particles, some of the particles got
randomly saturated: the particles reached the maximum velocity and the extreme
edges of the space.

Table 3.1: Number of iterations.


The system can be made adaptive by varying the weights according to the positions of
the particles. A higher value of wi means that the dependence of the new velocity on
the previous velocity is greater. Therefore, to make the program adaptive, the value of
wi can initially be defined to be greater than one. As the particles start approaching the
target, the velocity needs to decrease; the two terms that depend on the difference
between the particle's position and the Pbest and Gbest positions become smaller.
Therefore, defining a new wi smaller than one starts reducing the velocity, which
implies less exploration and more exploitation.

Table 3.2: Effect of c1 and c2 on the number of iterations with swarm size = 10 and wi = 0.6

The acceleration constants c1 and c2 also have an effect on the velocity of the
particles. Constant c1 corresponds to the 'Pbest' term of the velocity equation and
c2 to the 'Gbest' term. The simulation was tested for values of c1 and c2 other than
2. Table 3.1 shows the results with c1 and c2 both taken to be 2, while Table 3.2
shows the effect of different values of c1 and c2 on the number of iterations. When
these values were taken greater than 2, the particles saturated; but it was observed
that reducing these values achieved faster convergence for the same number of
particles and the same value of wi. This result shows the importance of the 'Pbest'
and 'Gbest' terms for the speed of the system. Another interesting observation was
that with c1 = 0.5 and c2 = 2, convergence was faster for a given wi and number of
particles. Since c2 corresponds to the 'Gbest' term, it can be seen that the 'Gbest'
term plays the greater role in the convergence of the particles. The results of the
two-target case are very similar to the single-target case. Here there are two loops
in the program, so the total number of iterations is


the sum total of the iterations required to identify the target and the number of
iterations required to converge at the desired target.

Table 3.3: Effect of c1 and c2 on the number of iterations with swarm size = 10

The number of iterations varies slightly from run to run, depending on where in the
space the particles are initialized. Since the initialization is random, this number
varies, so the values above are approximate. The number of iterations required to
achieve this task is lower than in the single-target case. This is because, once the
target of interest has been identified, a few particles are already at that particular
location, so only the others need to be moved. This can be done either by simply
supplying the new location information and forcing the particles to move there, or by
implementing the particle swarm algorithm again. The results shown above were obtained
by employing particle swarm again. The number of iterations would decrease further if,
instead of the PSO algorithm, the particles were moved directly to the new location.
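The two-loop procedure can be sketched as follows. This is a simplification under stated assumptions: the two target coordinates and all parameter values are illustrative, and the sketch models the two loops simply as two successive PSO runs, with the second run starting from the swarm positions left by the first:

```python
import random

def run_pso(swarm, target, wi=0.6, c1=2.0, c2=2.0, vmax=10.0,
            bound=100.0, tol=1.0, max_iter=500, rng=None):
    """Move an existing swarm (list of [x, y] positions, mutated in
    place) toward `target` with standard PSO velocity updates; return
    the iteration count and the final swarm."""
    rng = rng or random.Random(0)
    vel = [[0.0, 0.0] for _ in swarm]
    dist = lambda p: ((p[0] - target[0])**2 + (p[1] - target[1])**2) ** 0.5
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=dist)[:]
    it = 0
    while dist(gbest) >= tol and it < max_iter:
        it += 1
        for i, p in enumerate(swarm):
            for d in range(2):
                v = (wi * vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - p[d])
                     + c2 * rng.random() * (gbest[d] - p[d]))
                vel[i][d] = max(-vmax, min(vmax, v))
                p[d] = max(0.0, min(bound, p[d] + vel[i][d]))
            if dist(p) < dist(pbest[i]):
                pbest[i] = p[:]
                if dist(pbest[i]) < dist(gbest):
                    gbest = pbest[i][:]
    return it, swarm

rng = random.Random(3)
swarm = [[rng.uniform(0, 100), rng.uniform(0, 100)] for _ in range(10)]
it1, swarm = run_pso(swarm, (20.0, 80.0), rng=rng)  # first loop: converge on target 1
it2, swarm = run_pso(swarm, (70.0, 30.0), rng=rng)  # second loop: re-run PSO for target 2
total = it1 + it2                                   # total iterations, as in the text
```

Moving the particles directly to the second target would replace the second call and reduce `total` to `it1`, matching the observation above.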


Conclusion and Future Work

It has been shown that Swarm Intelligence optimization algorithms can be successfully
implemented for different kinds of optimization problems, and Swarm Intelligence-based
techniques have been used in a number of applications.

It has been shown that PSO can be successfully implemented for a single target with a
known location, and results have also been shown for a two-target case. The same
algorithm can be extended to multiple targets with known locations, and then further
to multiple targets with unknown locations. The parameter that describes the target
can be, for example, the intensity of a light source or the radiation of a source.
This parameter is important because it helps in identifying the target of particular
interest. The results obtained show that PSO has potential for application in
unmanned vehicles to be used in hazardous and dangerous environments.

Future work includes a study of the UltraSwarm technique. The general concept of
combining swarm intelligence and wireless cluster computing is called UltraSwarm.
Although the genesis of the idea occurred in the context of flocking systems, the basic
philosophy could also apply to swarm intelligence systems based on social insect
behaviour.

