
BASICS OF PARTICLE SWARM OPTIMIZATION (PSO)

Dr. K.S. Amirthagadeswaran
Professor of Mechanical Engineering, Government College of Technology, Coimbatore-13

1. OVERVIEW OF PSO

Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles.

2. THE BASIC PSO ALGORITHM

The terms used in the PSO algorithm are defined below.

X_i^k    = position of particle i at iteration k
F_i^k    = fitness value of particle i at iteration k
F_i^best = best fitness value found so far by particle i
P_i^k    = best position visited so far by particle i (corresponding to F_i^best)
F_g^best = best fitness value found so far by the whole swarm
P_g^k    = position of the particle having the global best fitness value
C1       = self-confidence factor
C2       = swarm confidence factor
i        = particle counter
p        = total number of particles

Step 1. Initialize the swarm
(a) Set the constants C1 and C2.
(b) Randomly initialize the particle positions.
(c) Randomly initialize the particle velocities.
(d) Set the counters to 1.

Step 2. Optimize
(a) Evaluate the fitness value F_i^k at X_i^k.
(b) If F_i^k < F_i^best, then set F_i^best = F_i^k and P_i^k = X_i^k.
(c) If F_i^k < F_g^best, then set F_g^best = F_i^k and P_g^k = X_i^k.
(d) If the stopping condition is satisfied, go to Step 3.
(e) Update the particle velocity V_i^(k+1) and position X_i^(k+1).
(f) Increment i. If i > p, increment the iteration counter k and reset i = 1.
(g) Go to Step 2(a).

Step 3. Report the results and terminate.

3. BASIC STEPS IN PSO

The basic idea behind the PSO algorithm is the social behavior of flocking birds. Birds searching for food in the search area move as a swarm. Initially one of the birds finds the food and travels towards it; the other birds follow one after the other, and finally all the birds in the swarm reach the food. Technically, these birds are called particles, moving within a search space towards the destination with a specific velocity. Like the flocking birds, these particles move one after the other

towards the best optimum solution in the search space. When the optimum solution is reached, the iterations end. The number of particles present within the search space is called the swarm size. For a problem with three control factors and one response, the minimum swarm size should be 6; in this project, the swarm size is taken as 10 to provide a margin of safety.

There are three important steps in each of the iterations:
1. Finding the initial position of the particles.
2. Updating the velocity of the particles.
3. Updating the position of the particles.

3.1 Initial position and velocity

The initial positions of the particles within the search space are found using the formula given below.

X0 = Xmin + rand * (Xmax - Xmin)
where
X0   = initial position
Xmin = minimum value of the factor
Xmax = maximum value of the factor
rand = a random number distributed uniformly between 0 and 1

Thus the algorithm is initialized with a set of particles randomly distributed within the search space. Random numbers are generated so as to maintain the required swarm size (say 10), and the ten particles are introduced into the search space at randomly varying positions. The initial velocities of the particles are assumed to be zero.
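The initialization described above can be sketched in Python. The swarm size of 10, the per-coordinate formula X0 = Xmin + rand * (Xmax - Xmin), and the zero initial velocities follow the text; the function name and the specific factor bounds are made up for illustration.

```python
import random

# Sketch of swarm initialization: each coordinate of each particle is
# placed at Xmin + rand * (Xmax - Xmin); initial velocities are zero.
# init_swarm and the bounds below are illustrative, not from the text.

def init_swarm(swarm_size, bounds, seed=None):
    rng = random.Random(seed)
    positions = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
                 for _ in range(swarm_size)]
    velocities = [[0.0] * len(bounds) for _ in range(swarm_size)]
    return positions, velocities

bounds = [(0.0, 10.0), (-5.0, 5.0), (1.0, 3.0)]  # hypothetical factor ranges
positions, velocities = init_swarm(10, bounds, seed=1)
# every coordinate lies inside its factor's range
assert all(lo <= x <= hi for p in positions for x, (lo, hi) in zip(p, bounds))
```

For a problem with three control factors, each particle is a 3-element position vector, so the swarm is a 10 x 3 array of positions plus a matching array of velocities.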

3.2 Updated velocity

The velocity with which the particles travel from their current positions (X^k) towards their updated positions (X^(k+1)) is the updated velocity, given by the expression below.

V_i^(k+1) = V_i^k + C1 * rand * (P_i^k - X_i^k) + C2 * rand * (P_g^k - X_i^k)
where
V_i^k = current velocity
C1    = self-confidence factor, typically in the range 1.5 to 2
C2    = swarm confidence factor, typically in the range 2 to 2.5
C1 * rand * (P_i^k - X_i^k) = particle memory influence
C2 * rand * (P_g^k - X_i^k) = swarm influence

3.3 Updated position

The position reached by a particle moving with the updated velocity calculated above is given by the expression below.

X_i^(k+1) = X_i^k + V_i^(k+1)

The updated position of the particle becomes its initial position for the next iteration, and the iterations continue until the convergence criteria are met. The convergence criterion may be a maximum number of iterations or a sufficiently small variation in the objective function between adjacent iterations. The position update takes into account the effect of both the particle best and the global best; the proportion of each effect is decided by the confidence factors, which may be set according to the relative importance of the two best values. In addition to the confidence factors, a momentum/inertia factor is also applied in certain versions of the algorithm.
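One combined velocity-and-position update for a single particle can be sketched as follows. The two expressions match Sections 3.2 and 3.3; the function name is illustrative, and drawing rand independently for each term and each dimension is a common convention assumed here rather than stated in the text.

```python
import random

# One particle's update per the expressions above:
#   V(k+1) = V(k) + C1*rand*(Pbest - X) + C2*rand*(Gbest - X)
#   X(k+1) = X(k) + V(k+1)
# update() and the argument names are illustrative.

def update(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=random):
    new_v = [vj
             + c1 * rng.random() * (pb - xj)    # particle memory influence
             + c2 * rng.random() * (gb - xj)    # swarm influence
             for xj, vj, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    return new_x, new_v

# A particle already at both its personal best and the global best,
# with zero velocity, stays put:
x, v = update([1.0, 2.0], [0.0, 0.0], [1.0, 2.0], [1.0, 2.0])
print(x)  # [1.0, 2.0]
```

Note that because rand multiplies the attraction terms, the pull towards the personal and global bests varies stochastically from step to step, which is what keeps the swarm exploring.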

The various velocity and position updates in the Particle Swarm Optimization algorithm are shown in figure 1.

Figure 1 - Position and velocity updates in PSO

4. PSO PARAMETERS

4.1 Particle
A particle is a point in the design space that changes its position from one move to another based on the velocity updates.

4.2 Swarm size
The size of the swarm is fixed once for all. The more particles there are, the faster the search in terms of the number of iterations; however, the iteration count alone is not a relevant criterion, since in each iteration the number of fitness evaluations equals the number of particles. Therefore, to reduce the total number of evaluations needed to find a solution, the size of the swarm should be kept small. Experimenters have proposed sizes of about 10 to 30 particles, which have indeed proved entirely sufficient to solve almost all classic test problems.

4.3 Initialization
Initialization simply consists of placing the particles randomly, according to a uniform distribution, within the search space. Applying a particle's velocity vector to its position then gives another position.

4.4 Fitness value
All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct the flight of the particles.

4.5 Global best
A point or solution x is said to be a global best if there exists no point in the entire search area which is better than x.

4.6 Particle best
A point or solution x is said to be a particle (local) best if there exists no point already visited by that particle which is better than x.

The flow chart of the Particle Swarm Optimization algorithm is appended at the end.

5. CERTAIN REMARKS

Particle swarm optimization is a simple algorithm which can be applied to a wide variety of problems. It shares features of genetic algorithms and evolutionary programming methods. The method has few parameters, and these can be easily fine-tuned. The algorithm can be easily implemented in any programming language.
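As a rough end-to-end sketch, the pieces above can be assembled into a minimal PSO in Python. The swarm size of 10, uniform random initialization, zero initial velocities, the confidence-factor ranges, and the personal/global best updates follow the text; the inertia weight, the velocity clamping, and the sphere test function are assumptions added here for a stable, self-contained demonstration, not prescribed by the text.

```python
import random

# Minimal PSO minimizing the sphere function f(x) = sum(x_j^2) in 3-D.
# Inertia weight w and velocity clamping are added for stability, as
# per the remark that some versions apply a momentum/inertia factor.

def pso(f, bounds, swarm_size=10, iters=200, c1=1.5, c2=2.0, w=0.6, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    vmax = [hi - lo for lo, hi in bounds]          # clamp |v| per dimension
    pos = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)] # initial velocities zero
    pbest = [p[:] for p in pos]
    pfit = [f(p) for p in pos]
    g = min(range(swarm_size), key=pfit.__getitem__)
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):                         # iteration counter k
        for i in range(swarm_size):                # particle counter i
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                vel[i][j] = max(-vmax[j], min(vmax[j], vel[i][j]))
                pos[i][j] += vel[i][j]
            fit = f(pos[i])
            if fit < pfit[i]:                      # Step 2(b): personal best
                pfit[i], pbest[i] = fit, pos[i][:]
                if fit < gfit:                     # Step 2(c): global best
                    gfit, gbest = fit, pos[i][:]
    return gbest, gfit

sphere = lambda x: sum(xj * xj for xj in x)        # minimum 0 at the origin
best, val = pso(sphere, [(-5.0, 5.0)] * 3)
print(round(val, 6))
```

The global best fitness is monotone non-increasing by construction, so the returned value is never worse than the best of the initial random swarm.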


A1. Flow chart of the Particle Swarm Optimization algorithm
