
Seminar Slides

TOPIC: PARTICLE SWARM OPTIMISATION


SUBJECT CODE : CS681

TEAM DETAILS -

1. Sayantan Pal (11500116044 | 182294)

2. Sourav Kumar (11500116045 | 182293)
3. Satnam Kumar (11500116046 | 182292)
4. Saptarshi Kundu (11500116047 | 182291)
Particle Swarm Optimization (PSO)

➢ Evolutionary computational technique based on the movement and intelligence of swarms looking
for the most fertile feeding location
➢ It was developed in 1995 by James Kennedy and Russell Eberhart
➢ A simple algorithm that is easy to implement, with few parameters to adjust (mainly the velocity update)
➢ A “swarm” is an apparently disorganized collection (population) of moving individuals that
tend to cluster together while each individual seems to be moving in a random direction
➢ It uses a number of agents (particles) that constitute a swarm moving around in the search space
looking for the best solution.
➢ Each particle is treated as a point in a D-dimensional space which adjusts its “flying” according to its
own flying experience as well as the flying experience of other particles
The Basic Idea of a Particle in PSO
❏ Each particle is searching for the optimum
❏ Each particle is moving and hence has a velocity.
❏ Each particle remembers the position it was in where it had its best result so far (its personal best or
pbest)
❏ But this would not be much good on its own; particles need help in figuring out where to search.
❏ The particles in the swarm co-operate. They exchange information about what they’ve
discovered in the places they have visited
❏ The co-operation is very simple.
❏ In basic PSO it is like this:
● A particle has a neighbourhood associated with it.
● A particle knows the fitnesses of those in its neighbourhood, and uses the position of the one with
the best fitness.
● This position is simply used to adjust the particle’s velocity
❏ PSO is derived from two concepts:
● The observation of swarming habits of animals such as birds or fish
● The field of evolutionary computation (such as genetic algorithms)
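
As a rough illustration of the particle described above, the sketch below shows one possible in-code representation holding a position, a velocity, and a personal best. It is a minimal Python sketch, not part of the original slides; the names (Particle, make_particle, n_dims) are illustrative assumptions.

```python
# Minimal sketch of a PSO particle: position, velocity, and personal best.
# All names here are illustrative assumptions, not from the original slides.
import random
from dataclasses import dataclass

@dataclass
class Particle:
    position: list        # current point in the D-dimensional search space
    velocity: list        # current velocity, one component per dimension
    pbest_position: list  # best position this particle has visited so far
    pbest_value: float    # fitness achieved at pbest_position

def make_particle(n_dims, lower, upper):
    """Create a particle with a feasible random position and a small random velocity."""
    span = upper - lower
    position = [random.uniform(lower, upper) for _ in range(n_dims)]
    velocity = [random.uniform(-span, span) for _ in range(n_dims)]
    # Before the first evaluation, the personal best is just the starting position.
    return Particle(position, velocity, list(position), float("inf"))
```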
Introduction to PSO (Origins):
In 1986, Craig Reynolds described flocking with 3 simple behaviors:

Separation: avoid crowding local flockmates
Alignment: move towards the average heading of local flockmates
Cohesion: move toward the average position of local flockmates
Particle Swarm Optimization (Contd...)
❏ PSO is inspired by the simulation of social behavior from artificial life research.

➢ Related to bird flocking, fish schooling and swarming theory


- steer toward the center
- match neighbors’ velocity
- avoid collisions
❏ PSO algorithm is not only a tool for optimization, but also a tool for representing sociocognition of
human and artificial agents, based on principles of social psychology.
❏ A PSO system combines local search methods with global search methods, attempting to balance
exploration and exploitation.
❏ Population-based search procedure in which individuals called particles change their position (state)
with time.
❏ Particles fly around in a multidimensional search space. During flight, each particle adjusts its
position according to its own experience, and according to the experience of a neighboring particle,
making use of the best position encountered by itself and its neighbor.
Particle Swarm Optimization (Contd...)
➢ Each particle keeps track of its coordinates in the problem space, which are associated with the best
solution (fitness) it has achieved so far. This value is called the personal best (pbest).
➢ Another best value tracked by the PSO is the best value obtained so far by any particle in the
neighborhood of the particle. This value is called the global best (gbest).
➢ The PSO concept consists of changing the velocity(or acceleration) of each particle toward its pbest
and the gbest position at each time step.
➢ Each particle tries to modify its current position and velocity according to the distance between its
current position and pbest, and the distance between its current position and gbest.
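
The pbest/gbest bookkeeping described above can be sketched as follows, assuming a minimization problem. This is a minimal sketch, not the authors' code; the helper name update_bests and the dictionary-based particle layout are illustrative assumptions.

```python
# Sketch of the pbest / gbest bookkeeping for a minimization problem.
# Assumes each particle is a dict with 'position', 'pbest_position' and 'pbest_value';
# these names are illustrative, not from the original slides.

def update_bests(particles, fitness, gbest_position, gbest_value):
    """Refresh every particle's personal best and the swarm's global best."""
    for p in particles:
        value = fitness(p["position"])
        if value < p["pbest_value"]:          # better than this particle's own history?
            p["pbest_value"] = value
            p["pbest_position"] = list(p["position"])
        if value < gbest_value:               # better than anything the swarm has seen?
            gbest_value = value
            gbest_position = list(p["position"])
    return gbest_position, gbest_value
```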
How do Particles adjust their positions?
Particles adjust their positions according to a "psychosocial compromise"
between what an individual is comfortable with and what society reckons.
Particle Swarm Optimisation
In each timestep, a particle has to move to a new position. It does this by adjusting
its velocity.

The adjustment is essentially this:

(The current velocity) PLUS

(A weighted random portion in the direction of its personal best) PLUS

(A weighted random portion in the direction of the neighbourhood best.)


Particle Swarm Optimization (Formulas)
Velocity Update Eqn:

v_id(new) = w * v_id(old) + c1 * rand1 * (Pid - x_id(old)) + c2 * rand2 * (Pgd - x_id(old))

Position Update Eqn:

x_id(new) = x_id(old) + v_id(new)

• i is the particle index and d is the dimension index; "old" denotes the nth iteration and "new" the (n+1)th iteration
Pid = individual best (pbest) solution of the ith particle at time 't'
Pgd = global best (gbest) solution of the swarm at time 't'
• w is the inertia coefficient (0.8 ≤ w ≤ 1.2)
('w' keeps the particle moving in the same direction it was originally heading. Lower 'w' values
speed up convergence, while higher 'w' values encourage exploring the search space.)
• c1, c2 are acceleration coefficients (0 ≤ c1, c2 ≤ 2)
(c1 - the cognitive component acts as the particle's memory, causing it to return to the regions of the
search space where it found its individual best. It represents how much a particle trusts its own past
experience. The cognitive coefficient c1 is usually close to 2; it limits the size of the step the particle
takes toward its individual best Pid.)
(c2 - the social component causes the particle to move toward the best regions the swarm has found so far.
It represents how much a particle trusts the swarm. The social coefficient c2 is usually close to 2; it
limits the size of the step the particle takes toward the global best Pgd.)
• rand1, rand2 are random values (0 ≤ rand1, rand2 ≤ 1), regenerated at every velocity update
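
As a minimal sketch (not the authors' code), the two update equations and the coefficients above can be applied componentwise as shown below. The default values of w, c1 and c2 are taken from the parameters slide later in this deck; the function and variable names are illustrative.

```python
import random

def update_velocity_and_position(x, v, pbest, gbest, w=0.75, c1=1.8, c2=2.0):
    """Apply the velocity and position update equations to one particle.

    x, v, pbest, gbest are lists of equal length (one entry per dimension d).
    The defaults for w, c1 and c2 come from the parameters slide in this deck.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        rand1 = random.random()                   # regenerated at every velocity update
        rand2 = random.random()
        v_d = (w * v[d]                           # inertia: keep moving the same way
               + c1 * rand1 * (pbest[d] - x[d])   # cognitive pull toward Pid
               + c2 * rand2 * (gbest[d] - x[d]))  # social pull toward Pgd
        new_v.append(v_d)
        new_x.append(x[d] + v_d)                  # x_new = x_old + v_new
    return new_x, new_v
```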
Particle Swarm Optimization (Process)
Canonical PSO Algorithm

1. Initialize population in hyperspace


2. Evaluate the fitness of each individual particle
3. Update individual and global bests.
4. Update/Modify velocities and position based on previous best and global (or
neighborhood) best positions
5. Terminate on some condition or return to step 2

These steps are repeated until some stopping condition is met.


Particle Swarm Optimization (Algorithm)
1. For each particle {
       Initialize the location and velocity of the particle with feasible random values
   } End
2. For each particle, evaluate the objective function
3. Do: For each particle {   // Update best solution
       a) Calculate the fitness value
       b) If the fitness value is better than the best fitness value (pbest) in its history,
          set the current value as the new pbest
   } End
4. Update the best global solution: choose the particle with the best fitness value of all the particles as
   the gbest
5. For each particle {
       a) Update velocity: calculate the particle velocity according to the velocity update equation
       b) Compute new location: update the particle position according to the position update equation
   } End

(Repeat from step 3 while the maximum number of iterations or the minimum error criterion is not attained,
i.e. until finished)
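
Putting the pieces together, below is a minimal, self-contained Python sketch of the algorithm above. It is not the authors' code: the sphere function is only a stand-in objective, the search bounds are assumptions, and every name is illustrative; the swarm size, iteration count, w, c1 and c2 match the parameters slide that follows.

```python
# Minimal PSO sketch following the steps above (minimization).
# The sphere objective and the search bounds are assumptions for illustration only.
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(fitness, n_particles=8, n_dims=2, lower=-5.0, upper=5.0,
        w=0.75, c1=1.8, c2=2.0, max_iters=10):
    # Step 1: initialize location and velocity of each particle with feasible random values
    span = upper - lower
    positions = [[random.uniform(lower, upper) for _ in range(n_dims)] for _ in range(n_particles)]
    velocities = [[random.uniform(-span, span) for _ in range(n_dims)] for _ in range(n_particles)]
    pbest_pos = [list(p) for p in positions]
    pbest_val = [float("inf")] * n_particles
    gbest_pos, gbest_val = None, float("inf")

    for _ in range(max_iters):
        # Steps 2-4: evaluate fitness, update personal bests and the global best
        for i in range(n_particles):
            value = fitness(positions[i])
            if value < pbest_val[i]:
                pbest_val[i] = value
                pbest_pos[i] = list(positions[i])
            if value < gbest_val:
                gbest_val = value
                gbest_pos = list(positions[i])
        # Step 5: update velocity and position of every particle
        for i in range(n_particles):
            for d in range(n_dims):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest_pos[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest_pos[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
    return gbest_pos, gbest_val

if __name__ == "__main__":
    best_position, best_value = pso(sphere)
    print("best position:", best_position, "best value:", best_value)
```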
Parameters used for PSO algorithm:

• Number of particles: 8
• Inertia coefficient (w): 0.75
• Cognitive coefficient (c1): 1.8
• Social coefficient (c2): 2
• Number of iterations: 10 (or no improvement for 4 consecutive iterations)
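
The stopping rule in the last parameter (at most 10 iterations, or stop early if the global best has not improved for 4 consecutive iterations) can be sketched as follows. This is a minimal sketch; run_one_iteration is a hypothetical callable standing in for one pass of the PSO steps.

```python
# Sketch of the stopping rule: at most `max_iters` iterations, or stop early when
# the global best has not improved for `patience` consecutive iterations.
# `run_one_iteration` is a hypothetical callable standing in for one pass of the
# PSO steps; it is assumed to return the current global best value.

def optimise(run_one_iteration, max_iters=10, patience=4):
    best_so_far = float("inf")
    stale_count = 0
    for _ in range(max_iters):
        gbest_value = run_one_iteration()
        if gbest_value < best_so_far:
            best_so_far = gbest_value
            stale_count = 0           # an improvement resets the counter
        else:
            stale_count += 1          # no improvement this iteration
            if stale_count >= patience:
                break                 # 4 iterations without improvement: stop early
    return best_so_far
```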

Analysis of Results
• Results are still preliminary, but encouraging
• Due to the randomized aspects of the PSO algorithm, the optimization process would
need to be run several times to determine whether the results are consistent
• Alternative PSO parameters can be attempted, and their effectiveness measured
Conclusions and Future Work
Conclusions:

• Significant speedup using PSO over exhaustive search

• Additional testing needed

Future Work:

• Other PSO variants can be tried

• Need to find optimal parameters for PSO itself


THANK YOU
