Particle Swarm Optimization: Pitfalls and Convergence Aspects

Andries Engelbrecht
Department of Computer Science, University of Pretoria, South Africa

Contents
- Introduction
- Overview of the basic PSO
- Aspects of PSO
- Particle Trajectories
- Problems with PSO
- Summary

What are the origins of PSO?
- the work of Reynolds on "boids" [8]
- the work of Heppner and Grenander on using a "rooster" as attractor [4]
- a simplified social model of determining nearest neighbors and velocity matching
- initial objective: to simulate the graceful, unpredictable choreography of collision-proof birds in a flock
- at each iteration, each individual determines its nearest neighbor and replaces its velocity with that of its neighbor
- this resulted in synchronous movement of the flock
- random adjustments to velocities prevented individuals from settling too quickly on an unchanging direction
- adding roosters as attractors:
  - personal best
  - neighborhood best
  - particle swarm optimization

Introduction

Particle swarm optimization (PSO):
- developed by Kennedy and Eberhart [5]
- first published in 1995
- with an exponential increase in the number of publications since then

What is PSO?
- a simple, computationally efficient optimization method
- a population-based, stochastic search
- based on a social-psychological model of social influence and social learning [6]
- individuals follow a very simple behavior: emulate the success of neighboring individuals
- emergent behavior: discovery of optimal regions of a high-dimensional search space

Questions:
- Even with so much research done on PSO, and applications of PSO to solve complex real-world problems, do we really understand the behavior of PSO?
- How can we make sure that we have convergent trajectories?
- How can we prevent premature convergence?

Overview of Basic PSO

What are the main components?
- a swarm of particles
- each particle represents a candidate solution
- elements of a particle represent the parameters to be optimized

The search process:
- position update:
    x_i(t+1) = x_i(t) + v_i(t+1)
  where x_ij(0) ~ U(x_min,j, x_max,j)
- velocity:
  - drives the optimization process
  - is the step size
  - reflects experiential knowledge and socially exchanged information

Global best (gbest) PSO
- uses the star social network
- velocity update per dimension:
    v_ij(t+1) = v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_j(t) - x_ij(t)]
- v_ij(0) = 0 (usually)
- c1, c2 are positive acceleration coefficients
- r1j(t), r2j(t) ~ U(0, 1)
- y_i(t) is the personal best position, calculated as
    y_i(t+1) = y_i(t)    if f(x_i(t+1)) >= f(y_i(t))
    y_i(t+1) = x_i(t+1)  otherwise
- ŷ(t) is the global best position, calculated as
    ŷ(t) ∈ {y_0(t), ..., y_ns(t)} such that f(ŷ(t)) = min{f(y_0(t)), ..., f(y_ns(t))}
  or, selected from the current positions,
    f(ŷ(t)) = min{f(x_0(t)), ..., f(x_ns(t))}
  where ns is the number of particles in the swarm
- the first network structures used: star and ring topologies
  [figure: star (gbest) and ring (lbest) topologies]

Local best (lbest) PSO
- uses the ring social network
- velocity update per dimension:
    v_ij(t+1) = v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_ij(t) - x_ij(t)]
- ŷ_i is the neighborhood best, defined as
    ŷ_i(t+1) ∈ {N_i | f(ŷ_i(t+1)) = min{f(x)}, ∀x ∈ N_i}
  with the neighborhood defined as
    N_i = {y_{i-nN}(t), ..., y_{i-1}(t), y_i(t), y_{i+1}(t), ..., y_{i+nN}(t)}
  where nN is the neighborhood size
- neighborhoods are based on particle indices, not spatial information
- neighborhoods overlap to facilitate information exchange

gbest PSO vs lbest PSO
- speed of convergence?
- susceptibility to local minima?
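The gbest update equations above can be sketched in Python. This is a minimal illustration, not the canonical implementation: the function name, NumPy usage, and parameter defaults are my own choices, and the inertia weight w (introduced later in these notes) is included so the sketch converges reliably.

```python
import numpy as np

def gbest_pso(f, bounds, n_particles=30, n_iters=200, c1=1.4, c2=1.4, w=0.72, seed=0):
    """Minimize f over a box using a basic gbest (star-topology) PSO sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # x_ij(0) ~ U(x_min,j, x_max,j)
    v = np.zeros_like(x)                              # v_ij(0) = 0 (usually)
    y = x.copy()                                      # personal best positions
    fy = np.array([f(p) for p in y])
    g = y[np.argmin(fy)].copy()                       # global best position

    for _ in range(n_iters):
        r1 = rng.uniform(size=(n_particles, dim))     # r1j(t) ~ U(0, 1)
        r2 = rng.uniform(size=(n_particles, dim))     # r2j(t) ~ U(0, 1)
        # velocity update: inertia + cognitive + social components
        v = w * v + c1 * r1 * (y - x) + c2 * r2 * (g - x)
        x = x + v                                     # position update
        fx = np.array([f(p) for p in x])
        improved = fx < fy                            # update personal bests
        y[improved] = x[improved]
        fy[improved] = fx[improved]
        g = y[np.argmin(fy)].copy()                   # update global best
    return g, f(g)
```

For example, minimizing the 2-D sphere function `lambda p: float(np.sum(p**2))` over [-5, 5]^2 drives the global best close to the origin within a few hundred iterations.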
Geometric Illustration
[figure: geometric illustration of the velocity and position updates]

Aspects of Basic PSO

Velocity components:
- previous velocity, v_i(t):
  - the inertia component
  - memory of the previous flight direction
  - prevents the particle from drastically changing direction
- cognitive component, c1 r1 (y_i - x_i):
  - quantifies performance relative to past performances
  - memory of the previous best position
  - "nostalgia"
- social component, c2 r2 (ŷ - x_i):
  - quantifies performance relative to neighbors
  - "envy"

Exploration-exploitation tradeoff:
- exploration: the ability to explore regions of the search space
- exploitation: the ability to concentrate the search around a promising area to refine a candidate solution
- c1 vs c2, and their influence on the exploration-exploitation tradeoff

Velocity clamping:
- the problem: velocities quickly explode to large values
- solution: clamp each velocity component,
    v_ij(t+1) = v'_ij(t+1)  if v'_ij(t+1) < V_max,j
    v_ij(t+1) = V_max,j     otherwise
- controls the global exploration of particles
- V_max,j is problem-dependent
- does not confine the positions, only the step sizes
- a problem to be aware of: [figure]
- dynamically changing V_max,j when the gbest position does not improve over τ iterations [9]:
    V_max,j(t+1) = γ V_max,j(t)  if f(ŷ(t)) >= f(ŷ(t - t')) for all t' = 1, ..., τ
    V_max,j(t+1) = V_max,j(t)    otherwise
- exponentially decaying V_max [3]:
    V_max,j(t+1) = (1 - (t/n_t)^α) V_max,j(0)

Inertia weight [10]:
- introduced to control exploration and exploitation
- controls the momentum of the particle
- the velocity update changes to
    v_ij(t+1) = w v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_j(t) - x_ij(t)]
- for w >= 1:
  - velocities increase over time
  - the swarm diverges
  - particles fail to change direction towards more promising regions
- for 0 < w < 1:
  - particles decelerate; convergence also depends on the values of c1 and c2

Dynamically changing inertia weights:
- random: w ~ N(0.72, σ)
- linearly decreasing [11]:
    w(t) = (w(0) - w(n_t)) (n_t - t)/n_t + w(n_t)
- non-linearly decreasing:
    w(t+1) = α w(t), with w(0) = 1.4
- based on relative improvement [1]:
    w_i(t+1) = w(0) + (w(n_t) - w(0)) (e^{m_i(t)} - 1)/(e^{m_i(t)} + 1)
  where the relative improvement, m_i, is estimated as
    m_i(t) = (f(ŷ_i(t)) - f(x_i(t))) / (f(ŷ_i(t)) + f(x_i(t)))
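Velocity clamping and the linearly decreasing inertia schedule can be written as small helpers. A hedged sketch: the function names are my own, and I clamp symmetrically to [-V_max,j, V_max,j] per component, which is how clamping is commonly implemented.

```python
import numpy as np

def clamp_velocity(v, v_max):
    """Velocity clamping: confine each component of v to [-v_max_j, +v_max_j].

    Note this limits only the step sizes, not the positions themselves.
    """
    return np.clip(v, -v_max, v_max)

def linear_decreasing_inertia(t, n_t, w_start=0.9, w_end=0.4):
    """Linearly decrease w from w(0) = w_start to w(n_t) = w_end:
    w(t) = (w(0) - w(n_t)) * (n_t - t) / n_t + w(n_t)
    """
    return (w_start - w_end) * (n_t - t) / n_t + w_end
```

The start and end values 0.9 and 0.4 are a common choice in the literature, not prescribed by these notes; any schedule that moves w from an explorative to an exploitative regime follows the same pattern.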
Constriction coefficient:
- velocity update:
    v_ij(t+1) = χ [v_ij(t) + φ1 (y_ij(t) - x_ij(t)) + φ2 (ŷ_j(t) - x_ij(t))]
  with
    χ = 2κ / |2 - φ - sqrt(φ(φ - 4))|, φ = φ1 + φ2, φ1 = c1 r1, φ2 = c2 r2
- if φ >= 4 and κ ∈ [0, 1], then the swarm is guaranteed to converge
- κ controls the exploration-exploitation tradeoff:
  - κ ≈ 0: fast convergence, local exploitation
  - κ ≈ 1: slow convergence, high degree of exploration
- effectively equivalent to the inertia weight for specific χ: w = χ, φ1 = χ c1 r1 and φ2 = χ c2 r2

Synchronous vs asynchronous updates:
- synchronous:
  - personal best and neighborhood bests are updated separately from the position and velocity vectors
  - slower feedback
  - better for gbest
- asynchronous:
  - new best positions are updated after each particle position update
  - immediate feedback about the best regions of the search space
  - better for lbest

Acceleration coefficients (trust parameters):
- c1 = c2 = 0?
- c1 > 0, c2 = 0:
  - particles are independent hill-climbers
  - local search by each particle
- c1 = 0, c2 > 0:
  - the swarm is one stochastic hill-climber
- c1 = c2 > 0:
  - particles are attracted towards the average of y_i and ŷ
- c2 > c1: more beneficial for unimodal problems

Particle Trajectories

Simplified particle trajectories [13]:
- no stochastic component
- a single, one-dimensional particle
- using the inertia weight w
- personal best and global best are fixed: y = 1.0, ŷ = 0.0

Example trajectories:
- convergence to an equilibrium (figure 4)
- cyclic behavior (figure 5)
- divergent behavior (figure 6)

Convergence condition:
- convergent trajectories require w > (φ1 + φ2)/2 - 1
- w = 0.9, c1 = c2 = 2.0 violates the convergence condition
- for w = 0.9, c1 + c2 < 3.8 is needed to satisfy the condition

[figure: stochastic particle trajectories for w = 0.9, c1 = c2 = 2.0]
- What is happening here?
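The simplified trajectory model can be simulated directly. A sketch under the stated assumptions: the deterministic case fixes the random factors at r1 = r2 = 1 (so φ1 = c1 and φ2 = c2), with the personal and global best held at y = 1.0 and ŷ = 0.0; the function name is my own.

```python
def simplified_trajectory(w, c1, c2, n_steps=100, x0=10.0, v0=0.0):
    """Deterministic 1-D particle: r1 = r2 = 1, fixed y = 1.0 and yhat = 0.0.

    Returns the sequence of positions x(t); the qualitative behavior
    (convergent, cyclic, or divergent) depends on w, c1, and c2.
    """
    y, yhat = 1.0, 0.0          # fixed personal and global best
    x, v = x0, v0
    xs = [x]
    for _ in range(n_steps):
        v = w * v + c1 * (y - x) + c2 * (yhat - x)   # velocity update
        x = x + v                                    # position update
        xs.append(x)
    return xs

# convergence condition in the deterministic case: w > (c1 + c2)/2 - 1
xs_conv = simplified_trajectory(w=0.5, c1=1.4, c2=1.4)  # 0.5 > 0.4: satisfied
xs_div  = simplified_trajectory(w=0.9, c1=2.0, c2=2.0)  # 0.9 < 1.0: violated
```

With w = 0.5 and c1 = c2 = 1.4 the particle spirals into the weighted average (c1*y + c2*ŷ)/(c1 + c2) = 0.5 of the two attractors; with w = 0.9 and c1 = c2 = 2.0 the condition fails and the position oscillates with growing amplitude.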
- since 0 < φ1 + φ2 < 4 and r1, r2 ~ U(0, 1), for c1 = c2 = 2.0:
    prob(φ1 + φ2 < 3.8) = 0.995
  so the convergence condition is satisfied most of the time

Problem with Basic PSO

It has been proven that particles converge to a stable point [13, 2, 12]:
    (φ1 y_i(t) + φ2 ŷ(t)) / (φ1 + φ2)

Problem:
- this point is not necessarily a minimum
- the swarm may prematurely converge to a stable state
- formal proofs are given in [13]

A potentially dangerous property:
- when x_i = y_i = ŷ,
- the velocity update depends only on w v_i
- if this condition persists for a number of iterations, w v_i → 0

Good convergent parameter choices:
- w = 0.72, c1 = c2 = 1.4
- these satisfy the convergence condition

Under stochastic φ1 and φ2, convergent behavior results when [13] the ratio
    φ_ratio = φ_crit / (c1 + c2)
is close to 1.0, where
    φ_crit = sup{φ | φ/2 - 1 < w, φ ∈ (0, c1 + c2]}

Guaranteed Convergence PSO (GCPSO):
- adapts a search radius ρ around the global best particle:
    ρ(t+1) = 2 ρ(t)    if #successes(t) > ε_s
    ρ(t+1) = 0.5 ρ(t)  if #failures(t) > ε_f
    ρ(t+1) = ρ(t)      otherwise
  where #successes and #failures respectively denote the number of consecutive successes and failures, with a failure defined as f(ŷ(t)) <= f(ŷ(t+1))
- GCPSO is proven to be a local minimizer

Summary

Despite its simplicity, PSO has been very successful.

However, care has to be taken in the selection of parameter values to ensure convergent trajectories.

The original PSO has a flaw which may cause it to converge prematurely to an equilibrium which does not represent an optimum.

References
[2] M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58-73, 2002.
[4] F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. In The Ubiquity of Chaos. AAAS Publications, 1990.
[5] J. Kennedy and R.C. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, pages 1942-1948, 1995.
[8] C.W. Reynolds. Flocks, herds, and schools: a distributed behavioral model. Computer Graphics, 21(4):25-34, 1987.
[10] Y. Shi and R.C. Eberhart. A modified particle swarm optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation, pages 69-73, 1998.
[13] F. van den Bergh. An Analysis of Particle Swarm Optimizers. PhD thesis, University of Pretoria, 2002.
