
**Jagdish Chand Bansal**

Mathematics Group, Birla Institute of Technology and Science, Pilani
Email: jcbansal@gmail.com, bits-pilani.ac.in

Overview

- Introduction
- An Example
- Some Developments
- Research Issues

Optimization Methods

Deterministic and Probabilistic

Deterministic Methods

Merits:
- Give exact solutions
- Do not use any stochastic technique
- Rely on a thorough search of the feasible domain

Demerits:
- Not robust
- Can only be applied to a restricted class of problems
- Often too time consuming, or sometimes unable to solve real-world problems

Probabilistic Methods

Merits:
- Applicable to a wider set of problems, i.e., the function need not be convex, continuous, or explicitly defined
- Use a stochastic or probabilistic, i.e., random, approach

Demerits:
- Converge to the global optimum only probabilistically
- Sometimes get stuck at local optima

Some Existing Probabilistic Methods

- Simulated Annealing (SA)
- Random Search Technique (RST)
- Genetic Algorithm (GA)
- Memetic Algorithm (MA)
- Ant Colony Optimization (ACO)
- Differential Evolution (DE)
- Particle Swarm Optimization (PSO)

Why PSO for Optimization?

Continuous optimization problems:
- Non-differentiable
- Non-convex
- Highly nonlinear
- Many local optima

Discrete optimization problems:
- NP-complete problems: nobody has yet found a good algorithm for any problem in this class
- Search speed

Particle Swarm Optimization: Inspiration

Artificial Life: the term artificial life (A-life) describes research into human-made systems that possess some of the essential properties of life. A-life includes two folds of research:
- How computational techniques can help in studying biological phenomena
- How biological techniques can help with computational problems

Inspiration (cont.)

Based on bird flocking, fish schooling, and the swarming theory of A-life. About fish schooling: "In theory at least, individual members of the school can profit from the discoveries and previous experience of all other members of the school during the search for food." (sociobiologist E. O. Wilson)

This is the basic concept behind PSO.

Inventors

Developed in 1995 by James Kennedy (right) and Prof. Russell Eberhart (left).

PSO uses a population of individuals to search the feasible region of the function space. In this context, the population is called a swarm and the individuals are called particles. Though the PSO algorithm has been shown to perform well, researchers have not yet been able to fully explain how it works.

Update Equations

Each particle modifies its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.

Velocity Update Equation (rate of change in the particle's position):

v = v + c1 r1 (pbest − current) + c2 r2 (gbest − current)

- r1, r2 are random numbers in (0, 1), used to stop the swarm converging too quickly.
- c1, c2 are acceleration factors that can be used to change the weighting between personal and population experience.
- The term c1 r1 (pbest − current) is the cognitive component, which draws individuals back to their previous best situations.
- The term c2 r2 (gbest − current) is the social component, where individuals compare themselves to others in their group.

Position Update Equation:

current = current + v
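A single update step following the two equations above can be sketched as below; the particle is one-dimensional and all numeric starting values are illustrative, not taken from the slides.

```python
import random

# One update step for a single one-dimensional particle.
c1 = c2 = 2.0          # acceleration factors (the usual setting, see parameter 5)
current, v = 3.0, 0.5  # illustrative current position and velocity
pbest, gbest = 2.0, -1.0

r1, r2 = random.random(), random.random()  # r1, r2 ~ U(0, 1)
# Velocity update: cognitive term pulls toward pbest, social term toward gbest.
v = v + c1 * r1 * (pbest - current) + c2 * r2 * (gbest - current)
# Position update.
current = current + v
```

Since pbest and gbest both lie below the current position here, the new velocity is pulled downward.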

PSO Parameters

1. The number of particles: typically 20-40. For most problems 10 particles are enough to get good results.
2. Dimension of particles: determined by the problem to be optimized.
3. Range of particles: also determined by the problem to be optimized; different ranges can be specified for different dimensions of the particles.

4. Vmax: limits the velocity to help keep the swarm under control. Usually the range of the particle is taken as Vmax, e.g., if X belongs to [-10, 10], then Vmax = 20. Another approach is Vmax = ⌊(UpBound − LoBound)/5⌋.
5. Learning/acceleration factors: c1 and c2 usually equal 2. However, other settings have also been used in different papers; usually c1 equals c2 and lies in the range [0, 4].
6. The stopping criteria: the maximum number of iterations the PSO executes and/or the minimum error requirement.

Basic Flow of PSO

1. Initialize the swarm from the solution space.
2. Evaluate the fitness of individual particles.
3. Modify gbest, pbest, and the velocities.
4. Move each particle to a new position.
5. Go to step 2, and repeat until convergence or a stopping condition is satisfied.
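The steps above can be sketched as a minimal gbest PSO in Python. The sphere objective, swarm size, iteration count, and Vmax value are illustrative choices for the sketch, not prescribed by the slides.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def sphere(x):
    """Illustrative test objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=200, lo=-10.0, hi=10.0,
        c1=2.0, c2=2.0, vmax=4.0):
    # Step 1: initialize positions and velocities from the solution space.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 3: velocity update (cognitive + social components),
                # clamped to [-vmax, vmax] to keep the swarm under control.
                v = (vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))
                # Step 4: move the particle to its new position.
                pos[i][d] += vel[i][d]
            # Step 2: evaluate fitness; update pbest and gbest.
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere)
```

The loop repeats steps 2-4 until the iteration budget (the stopping criterion here) is exhausted.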

An Example

Step-by-step understanding of the PSO procedure.

Two Versions of PSO

- gbest PSO: the global version is faster but might converge to a local optimum for some problems.
- lbest PSO: the local version is a little slower but is not as easily trapped in a local optimum.

One can use the global version to get a quick result and the local version to refine the search.

Binary PSO

- This version has attracted much less attention than the continuous PSO.
- A particle's position is not a real value, but either 0 or 1.
- The velocity represents the probability of a bit taking the value 0 or 1, not the rate of change in the particle's position as in PSO for continuous optimization.

Binary PSO

The particle's position in a dimension is randomly generated using the sigmoid function

sigm(x) = 1 / (1 + exp(−x))

[Figure: the sigmoid curve, rising from 0 to 1 over x in [−6, 6]]

Velocity and Position Update

v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)

x_id = 1 if rand() < sigm(v_id), 0 otherwise
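A sketch of one binary-PSO update of a bit-string particle follows. The velocity clamp vmax and the example bit strings are illustrative assumptions, not taken from the slides.

```python
import math
import random

def sigm(x):
    """Sigmoid that maps a velocity to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_update(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=4.0):
    """One binary-PSO update of a bit-string particle.

    x, pbest, gbest are lists of 0/1 bits; v holds the per-bit velocities.
    Clamping v to [-vmax, vmax] is a common practical choice (assumed here).
    """
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        v[d] += c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        v[d] = max(-vmax, min(vmax, v[d]))
        # The velocity acts as a probability: bit d becomes 1 with prob sigm(v[d]).
        x[d] = 1 if random.random() < sigm(v[d]) else 0
    return x, v

x, v = binary_update([0, 1, 0, 1], [0.0] * 4, [1, 1, 0, 0], [1, 0, 1, 0])
```

Note that, unlike the continuous version, the position is resampled from the sigmoid probability rather than moved by the velocity.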

No Free Lunch Theorem

- In a controversial paper in 1997 (available at the AUC library), Wolpert and Macready proved that "averaged over all possible problems or cost functions, the performance of all search algorithms is exactly the same".
- No algorithm is better on average than blind guessing.

Important Developments

Almost all modifications vary the velocity update equation in some way.

A Brief Review

- PSO-W: PSO with inertia weight
- PSO-C: PSO with constriction factor
- FIPSO: Fully informed PSO
- HPSOM: Hybrid PSO with mutation
- MeanPSO: Mean PSO
- qPSO: Quadratic approximation PSO

Inertia Weight

Shi and Eberhart introduced the inertia weight w into the algorithm (PSO-W). The iterative expressions become:

v = w * v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
current = current + v

w represents the inertia weight, which enhances the exploration ability of particles.

Why Inertia Weight?

When using PSO, it is possible for the magnitude of the velocities to become very large, and performance can suffer if Vmax is inappropriately set. To control the growth of velocities, a dynamically adjusted or constant inertia weight was introduced.

- Larger w: greater global search ability.
- Smaller w: greater local search ability.
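A dynamically adjusted inertia weight can be sketched as a linear decrease over the run; the 0.9 → 0.4 range is a widely used convention assumed here, since the slides only say that w may be constant or dynamically adjusted.

```python
def inertia_weight(iteration, max_iter, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight for PSO-W.

    Early iterations (large w) favor global search; later iterations
    (small w) favor local search, matching the trade-off above.
    """
    return w_start - (w_start - w_end) * iteration / max_iter

# w decays from 0.9 at iteration 0 to 0.4 at the final iteration.
ws = [inertia_weight(t, 100) for t in (0, 50, 100)]
```

The schedule front-loads exploration and ends with exploitation, which is why it is a common default.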

Constriction Factor

Clerc and Kennedy proposed the constriction factor, which is effective for making the algorithm converge (PSO-C):

v = χ * (v + c1 r1 (pbest − current) + c2 r2 (gbest − current))
current = current + v

χ = 2 / |2 − φ − √(φ(φ − 4))|,  φ = c1 + c2 > 4
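The constriction coefficient can be computed directly from the formula above; c1 = c2 = 2.05 (so φ = 4.1) is the customary setting, assumed here rather than stated on the slide, and yields χ ≈ 0.7298.

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi for phi = c1 + c2 > 4."""
    phi = c1 + c2
    assert phi > 4, "the formula requires phi = c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction()  # ~0.7298 for phi = 4.1
```

Multiplying the whole velocity update by χ < 1 damps the velocities without needing an explicit Vmax.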

Fully Informed PSO

A particle is attracted by every other particle in its neighborhood:

v = χ * [ v + Σ_{i ∈ N} c_i r_i (p(i) − current) ]

Stagnation

v = w * v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
v = χ * (v + c1 r1 (pbest − current) + c2 r2 (gbest − current))

The PSO algorithm performs well in the early stage, but easily becomes premature in the area of a local optimum. If the current position of a particle is identical to the global best position and its current velocity is small, the velocity depends only on the inertia weight (or constriction factor), so the velocity in the next iteration will be smaller still. The particle is then trapped in this area, which leads to premature convergence. This phenomenon is known as stagnation.

Hybrid Particle Swarm Optimizer with Mutation (HPSOM)

HPSOM has the potential to escape from a local optimum and search at a new position. The mutation scheme randomly chooses a particle and then moves it to a different position in the search area. The operation is as follows:

mut(x_id) = x_id + Δx,  if rand() > 0.5
mut(x_id) = x_id − Δx,  if rand() < 0.5

where Δx is randomly drawn from [0, 0.1 × (max range(d) − min range(d))]. This mutation operation is governed by a constant called the probability of mutation.
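The mutation operator can be sketched as below; the probability of mutation p_mut = 0.1 is an illustrative value, since the slides leave the constant open.

```python
import random

def hpsom_mutate(x, lo, hi, p_mut=0.1):
    """HPSOM-style mutation of a particle position x (list of floats).

    lo[d] and hi[d] bound dimension d; p_mut is the probability of mutation
    (0.1 here is an assumed, illustrative value).
    """
    x = x[:]
    for d in range(len(x)):
        if random.random() < p_mut:
            # Delta drawn from [0, 0.1 * (max range - min range)], as on the slide.
            delta = random.uniform(0.0, 0.1 * (hi[d] - lo[d]))
            # Add or subtract delta with equal probability.
            x[d] = x[d] + delta if random.random() > 0.5 else x[d] - delta
    return x

mutated = hpsom_mutate([1.0, -2.0, 3.0], lo=[-10.0] * 3, hi=[10.0] * 3)
```

Applied occasionally inside the PSO loop, this perturbation lets a stagnating particle jump out of the local-optimum area.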

MeanPSO

In MeanPSO the attraction terms use the mean-based points (gbest + pbest)/2 and (gbest − pbest)/2 in place of pbest and gbest.

[Figure: positions of current, pbest, gbest, and the solutions found by PSO vs. MeanPSO]


qPSO: Quadratic Approximation (QA)

Let R1 be the particle with the best fitness value, and let R2 and R3 be randomly chosen distinct particles. Then

R* = 0.5 * [ (R2² − R3²) f(R1) + (R3² − R1²) f(R2) + (R1² − R2²) f(R3) ] / [ (R2 − R3) f(R1) + (R3 − R1) f(R2) + (R1 − R2) f(R3) ]

where f(Ri) is the objective function value at Ri, for i = 1, 2, and 3. The calculations are done component-wise to obtain R*.
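The component-wise R* formula can be sketched as follows; the zero-denominator fallback is an assumption added for robustness, not specified on the slide.

```python
def qa_point(r1, r2, r3, f):
    """Component-wise quadratic-approximation point R* from three particles.

    r1 is the best particle; r2, r3 are randomly chosen distinct particles
    (all lists of floats). In each component, R* minimizes the quadratic
    fitted through the three points.
    """
    f1, f2, f3 = f(r1), f(r2), f(r3)
    r_star = []
    for a, b, c in zip(r1, r2, r3):
        num = (b * b - c * c) * f1 + (c * c - a * a) * f2 + (a * a - b * b) * f3
        den = (b - c) * f1 + (c - a) * f2 + (a - b) * f3
        # If the denominator vanishes, the quadratic model is degenerate in this
        # component; fall back to the best particle's coordinate (an assumption).
        r_star.append(0.5 * num / den if den != 0 else a)
    return r_star

# For f(x) = sum(x_i^2) the quadratic model is exact, so R* lands at the origin.
r_star = qa_point([1.0], [2.0], [3.0], lambda x: sum(v * v for v in x))
```

Because the model is a quadratic through three points, R* is exact whenever the objective itself is quadratic, as in this one-dimensional check.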

The Process of Hybridization

Figure 4.1: Transition from the ith iteration to the (i+1)th iteration. Particles s1, s2, …, sp are updated by PSO (giving s'1, s'2, …, s'p), while particles sp+1, sp+2, …, sm are updated by QA (giving s'p+1, s'p+2, …, s'm).

The percentage of the swarm which is to be updated by QA is called the Coefficient of Hybridization (CH).

Flowchart of the qPSO Process

1. Start: generate a random swarm and set ITER = 0.
2. Evaluate the objective function value of all particles and determine GBEST.
3. If the stopping criterion is satisfied, report the best particle and end.
4. Otherwise set ITER = ITER + 1 and split the swarm S into subswarms S1 and S2.
5. For S1: determine pbest and gbest (= GBEST); perform the velocity and position updates using PSO.
6. For S2: if it is possible to determine R1, R2, and R3 such that at least two of them are distinct, set R1 = GBEST, choose R2 and R3, and update positions using QA; otherwise update positions using PSO.
7. Evaluate the objective function value of all particles, determine GBEST, and go to step 3.

Research Issues

- Hybridization
- Parallel implementation
- New variants: modification of the velocity update equation; introduction of new operators in PSO
- Discrete particle swarm optimization
- Interaction with biological intelligence
- Convergence analysis

Some Unsolved Issues

- Convergence analysis
- Cryptanalysis
- Combination of various PSO techniques to deal with complex problems
- Dealing with discrete variables
- Interaction with biological intelligence
