
INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET)

ISSN 0976-6464 (Print), ISSN 0976-6472 (Online)
Volume 4, Issue 5, September-October 2013, pp. 218-224
IAEME: www.iaeme.com/ijecet.asp
Journal Impact Factor (2013): 5.8896 (Calculated by GISI), www.jifactor.com
SELF ACCELERATED SMART PARTICLE SWARM OPTIMIZATION FOR NONLINEAR PROGRAMMING PROBLEMS
Anuradha L. Borkar
Electronics and Telecommunication Dept., M S S S College of Engineering and Technology, Jalna, Maharashtra, India

ABSTRACT

This paper presents Self Accelerated Smart Particle Swarm Optimization (SASPSO) for nonlinear programming problems (NLP). In SASPSO, the positions of the particles are updated using only pbest (personal or local best) and gbest (global best). The main advantage of SASPSO is that it does not require a velocity equation. In addition, it does not require any additional parameters, such as the acceleration coefficients and inertia weight used in other PSO algorithms. A momentum factor is introduced in SASPSO that prevents particles from leaving the defined region without checking the validity of positions at every iteration, which saves computational cost. During the initial stages of a run the step size is large, and during the final stage the step size is reduced to a smaller value. SASPSO is tested on global optimization problems, namely nonlinear programming problems (NLP), and the results are compared with the Genetic Algorithm (GA). SASPSO outperforms GA in the accuracy of the optimal solution and in the number of generations required, and it also yields a larger number of possible optimal solutions than GA.

Keywords: gbest (global best), Nonlinear programming problems (NLP), pbest (personal or local best), Self Accelerated Smart Particle Swarm Optimization (SASPSO).

I. INTRODUCTION

Most real-life problems occurring in the fields of science and engineering may be modeled as nonlinear programming (NLP), unimodal, or multimodal optimization problems. Nonlinear programming is the process of solving a system of equalities and inequalities over a set of unknown real variables, together with an objective function to be maximized or minimized, where some of the constraints or the objective function are nonlinear. No general algorithm exists for solving nonlinear programming problems. However, for problems with certain suitable structures, efficient algorithms have been developed, and it is sometimes possible to convert a given NLP into one in which these structures become visible.


Nonlinear programming problems and unimodal or multimodal problems are generally considered difficult to solve because several local and global optima exist. Effective optimization of nonlinear programming problems and of multi-dimensional, multimodal functions with fast convergence and good solution quality is a challenging task. The high computational cost, together with the demand for improved accuracy of the global optima of nonlinear programming problems and multimodal functions, has driven researchers to develop efficient optimization techniques in the form of new, modified, or hybrid soft-computing techniques. John Holland conceived the Genetic Algorithm (GA) in the mid 1970s [1]. GA is inspired by the principles of genetic evolution and mimics the reproduction behavior observed in biological populations. GA suffers from premature convergence when an individual that is fitter than the others at an early stage dominates the reproduction process, leading to convergence to a local optimum rather than the more thorough search that could have led to a global optimum [1]. GA has been extensively applied to complex design optimization problems due to its capability to handle constraint functions without requiring gradient information [1]. However, the risk of getting trapped in local minima and the three-step procedure of GA [1-2] (selection, crossover, and mutation) increase the computational time, and this has forced researchers to search for more efficient optimization techniques. Particle Swarm Optimization (PSO) is a population-based optimization method developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. It can efficiently handle nonlinear, unimodal, and multimodal function optimization [3]. Compared to GA, PSO is easier to implement and converges faster with a smaller memory requirement [4]; however, unlike GA, PSO has no evolution operators such as crossover and mutation. New variants of PSO have been proposed for faster convergence and better quality of the optimum solution, such as the Supervisor-Student Model in Particle Swarm Optimization (SSM-PSO) [5], Linear Decreasing Weight Particle Swarm Optimization (LDW-PSO) [6], Gregarious Particle Swarm Optimization (GPSO) [7], Global and Local Best Particle Swarm Optimization (GLBestPSO) [8], and Emotional Particle Swarm Optimization (EPSO) [9]. The author proposes Self Accelerated Smart Particle Swarm Optimization (SASPSO) for nonlinear programming problems (NLP). The rest of the paper is organized as follows: Section II reviews the original Particle Swarm Optimization (PSO); Section III describes the proposed method; Section IV presents experimental results on nonlinear programming problems (NLP) obtained by the proposed method and other published techniques; Section V concludes the paper.

II. REVIEW OF PARTICLE SWARM OPTIMIZATION

The original framework of PSO was designed by Kennedy and Eberhart in 1995 and is therefore known as standard PSO [3]. PSO carries out the optimization process by means of the personal or local best (pi), the global best (pg), the particle position or displacement (X), and the particle velocity (V). For each particle, a record is kept at the current time step of its position, velocity, and the best position it has found in the search space; each particle memorizes its previous velocity and previous best position and uses them in its movements [3]. The velocities (V) of the particles are limited to [Vmin, Vmax]: if a velocity component is smaller than Vmin it is set to Vmin, and if it is greater than Vmax it is set to Vmax. Since the original version of PSO lacks any velocity control mechanism beyond this clamping, it has a poor ability to search at a fine grain. The two fundamental updating equations of PSO are the velocity and position equations, expressed as Eqs. (1) and (2).



V_id(t+1) = V_id(t) + c1 * r1d(t) * (p_id(t) - X_id(t)) + c2 * r2d(t) * (p_gd(t) - X_id(t))    (1)

X_id(t+1) = X_id(t) + V_id(t+1)    (2)
Where:
t = current iteration or generation
i = particle number
d = dimension index
V_id(t) = velocity of the i-th particle in dimension d at iteration t
X_id(t) = position of the i-th particle in dimension d at iteration t
c1, c2 = acceleration constants
r1d(t), r2d(t) = random values in [0, 1] for dimension d at iteration t
p_id(t) = personal (local) best of the i-th particle in dimension d at iteration t
p_gd(t) = global best in dimension d at iteration t

The right-hand side of Eq. (1) consists of three parts. The first part is the previous velocity of the particle. The second part is the cognitive (self-knowledge or memory) component, which attracts the particle toward its own previous best position. The third part is the social (cooperation) component, which attracts the particle toward the best position found so far by the population. The balance among these three parts largely determines the performance of the algorithm; a minimal sketch of this update follows.
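To make Eqs. (1) and (2) concrete, here is a minimal sketch of one standard PSO update step in Python. This is not code from the paper; the vectorized array shapes, the default values c1 = c2 = 2.0, and the clamping bounds are illustrative assumptions.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, c1=2.0, c2=2.0, v_min=-1.0, v_max=1.0):
    """One standard PSO update per Eqs. (1)-(2).

    X, V, pbest : arrays of shape (n_particles, n_dims)
    gbest       : array of shape (n_dims,)
    """
    n, d = X.shape
    r1 = np.random.rand(n, d)      # fresh random numbers in [0, 1]
    r2 = np.random.rand(n, d)
    # Eq. (1): previous velocity + cognitive term + social term
    V = V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, v_min, v_max)   # velocity clamping to [Vmin, Vmax]
    # Eq. (2): move each particle by its new velocity
    X = X + V
    return X, V
```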

III. SELF ACCELERATED SMART PARTICLE SWARM OPTIMIZATION (SASPSO)

Standard PSO suffers from the following disadvantages:
1. The swarm may prematurely converge when some poor particles attract the others, or due to local optima or bad initialization.
2. Its performance is problem dependent; no single parameter setting exists that can be applied to all problems.
3. It requires tuning of parameters such as c1, c2, w, the number of iterations, and the swarm size.
4. Increasing the value of the inertia weight w increases the speed of the particles, resulting in more global search and less local search; decreasing the inertia weight slows the particles down, resulting in more local search and less global search.

To avoid these disadvantages of standard PSO, the author proposes Self Accelerated Smart Particle Swarm Optimization (SASPSO). In SASPSO, the positions of the particles are updated using only pbest (personal or local best) and gbest (global best) particle positions, as expressed in Eq. (3). The main advantage of SASPSO is that it does not require a velocity equation. In addition, it does not require any additional parameters, such as the acceleration coefficients and inertia weight used in other PSO algorithms. The momentum factor prevents particles from leaving the defined region without checking the validity of positions at every iteration, which saves computational cost.

X_i(t+1) = X_i(t) + Mc * rand * (X_i(t) - (gbest - pbest_i)) + (gbest - pbest_i) * rand    (3)

Where:
t = current iteration or generation
i = particle number
X_i(t) = position of the i-th particle at iteration t
rand = random values in [0, 1] at iteration t (two independent draws, r1 and r2, per update)
pbest_i = personal (local) best of the i-th particle at iteration t
gbest = global best at iteration t
Mc = momentum factor

During the initial stages of a run the step size is large, so the positions of the particles lie far from the global best position; during the final stage the step size is reduced to a smaller value.
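The update of Eq. (3) can be sketched as follows. The parenthesization of Eq. (3) as printed is ambiguous, so this sketch assumes the grouping shown above, with one independent random draw per term; it is an illustration, not the author's published code.

```python
import numpy as np

def saspso_step(X, pbest, gbest, Mc=0.005):
    """One SASPSO position update per Eq. (3). No velocity is kept, and no
    c1/c2/inertia-weight parameters are needed; Mc is the momentum factor.

    X, pbest : arrays of shape (n_particles, n_dims)
    gbest    : array of shape (n_dims,)
    """
    diff = gbest - pbest            # gbest - pbest_i, per particle
    r1 = np.random.rand(*X.shape)   # independent draws in [0, 1]
    r2 = np.random.rand(*X.shape)
    # Eq. (3): the momentum term Mc*r1*(X - diff) keeps the step small,
    # while diff*r2 pulls each particle by the gbest-pbest difference.
    return X + Mc * r1 * (X - diff) + diff * r2
```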

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

The four quadratic programming problems [10] shown in Table 1 are used to validate the performance of SASPSO. Both maximization and minimization functions are considered to test the efficiency of the method: the first and second problems are minimization problems, and the third and fourth are maximization problems. The solution obtained by Beale's method [10] for each quadratic programming problem is given in Table 1. The equation, along with its constraints, is supplied to the program, and the objective function to be minimized or maximized serves as the fitness function. In SASPSO, the population is taken as a double vector of size 20. The momentum factor is chosen as 0.005, which keeps the algorithm from escaping the global optimum. The stopping criterion is an error value of 0.1: if the error is equal to or less than 0.1, SASPSO is stopped. The number of trials is 10. The number of generations required and the solutions for each quadratic programming problem using GA and SASPSO are presented in Table 2. For the first and second NLP problems, SASPSO requires fewer generations than GA, and the accuracy of its solutions is better than that of GA. For the third and fourth NLP problems, SASPSO also requires fewer generations than GA and its accuracy is 100%, which is again better than GA; however, SASPSO produces the same solution in every trial on the maximization problems. Fig. 1 shows the number of generations required for each NLP problem by GA and SASPSO. As the results show, SASPSO requires fewer computations than GA while improving the quality of the optimum. The presented method is suitable for the optimization of quadratic programming problems and of unimodal and multimodal functions.

Table 1. Quadratic Programming Problems solved

Eq. 1 (minimization):
  Min Z = X1^2 + X2^2
  Subject to: X1 + X2 >= 4;  2*X1 + X2 >= 5;  X1, X2 >= 0
  Beale's method: X1 = 2, X2 = 2, Z = 8

Eq. 2 (minimization):
  Min Z = 183 - 44*X1 - 42*X2 + 8*X1^2 - 12*X1*X2^2
  Subject to: 2*X1 + X2 <= 10;  X1, X2 >= 0
  Beale's method: X1 = 3.8, X2 = 2.4, Z = 19

Eq. 3 (maximization):
  Max Z = 2*X1 + 3*X2
  Subject to: X1^2 + X2^2 <= 20;  X1*X2 <= 8;  X1, X2 >= 0
  Beale's method: X1 = 2, X2 = 4, Z = 16

Eq. 4 (maximization):
  Max Z = 2*X1 + 2*X2 - 2*X2^2
  Subject to: X1 + 4*X2 <= 4;  X1 + X2 <= 2
  Beale's method: X1 = 2, X2 = 0, Z = 4
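As an illustration of the experimental setup described above (population of 20, Mc = 0.005, stop when the error from the known optimum is at most 0.1, as for problem 1 with Z = 8), the following sketch applies the Eq. (3) update to problem 1 of Table 1. The penalty-based constraint handling, the penalty weight, the search bounds, and the generation cap are all assumptions, since the paper does not state how constraints are enforced; whether this sketch reproduces the paper's generation counts depends on those unstated details.

```python
import numpy as np

def fitness(P):
    """Problem 1 of Table 1: Z = x1^2 + x2^2, with quadratic penalties for
    violating x1 + x2 >= 4, 2*x1 + x2 >= 5 and x1, x2 >= 0 (the penalty
    scheme is an assumption, not taken from the paper)."""
    x1, x2 = P[:, 0], P[:, 1]
    z = x1 ** 2 + x2 ** 2
    pen = (np.maximum(0.0, 4.0 - (x1 + x2)) ** 2
           + np.maximum(0.0, 5.0 - (2.0 * x1 + x2)) ** 2
           + np.maximum(0.0, -x1) ** 2 + np.maximum(0.0, -x2) ** 2)
    return z + 1e3 * pen

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(20, 2))   # population of 20, as in the paper
pbest, pcost = X.copy(), fitness(X)
Mc = 0.005                                # momentum factor from the paper

for gen in range(1, 1001):                # generation cap is an assumption
    gbest = pbest[np.argmin(pcost)]
    if abs(fitness(gbest[None, :])[0] - 8.0) <= 0.1:  # stop at error <= 0.1
        break
    diff = gbest - pbest                  # Eq. (3) attraction term
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    X = X + Mc * r1 * (X - diff) + diff * r2          # Eq. (3) update
    cost = fitness(X)
    improved = cost < pcost               # refresh personal bests
    pbest[improved], pcost[improved] = X[improved], cost[improved]

print("generations:", gen, "solution:", gbest, "Z:", pcost.min())
```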


Table 2. Comparison of Quadratic Programming Problems solved by using GA and SASPSO

(For each of the four problems, the original table lists the decision variables X1 and X2, the number of generations, the optimized objective value, and the accuracy for five GA trials [10] and ten SASPSO trials. In summary: GA solved problem 1 in 4 generations per trial with optima between 8.00038 and 8.04788 at 100% accuracy, while its accuracy on the maximization problems 3 and 4 ranged from 97.75% to 99.97%. SASPSO reached optima between 8.00048 and 8.08449 on problem 1 and between 19.00008 and 19.09185 on problem 2, and converged to the identical values Z = 16.02632 on problem 3 and Z = 4.01300 on problem 4 in every trial, with 100% accuracy on all forty trials and generation counts between 2 and 8.)



Fig. 1. Comparison of the number of generations required by GA and SASPSO for each equation (bar chart; x-axis: equation number 1-4, y-axis: number of generations, 0-7)

V. CONCLUSION

The author introduced Self Accelerated Smart Particle Swarm Optimization (SASPSO) to optimize difficult nonlinear programming problems (NLP) with fast convergence and better accuracy. The main advantage of SASPSO is that it does not require a velocity equation. In addition, it does not require any additional parameters, such as the acceleration coefficients and inertia weight used in other PSO algorithms. The momentum factor introduced in SASPSO prevents particles from leaving the defined region without checking the validity of positions at every iteration, which saves computational cost. Using SASPSO, a large number of combinations of decision-variable values satisfying all the given constraints is obtained in a very short time. The results of SASPSO are good in terms of the accuracy of the optimal solution, with fewer generations required than GA. The proposed technique is efficient at improving the quality of the global optima of nonlinear programming problems (NLP) and of unimodal and multimodal functions, with less computational effort and better accuracy.

VI. REFERENCES

[1] Boeringer D. W. and Werner D. H., Particle swarm optimization versus genetic algorithms for phased array synthesis, IEEE Transactions on Antennas and Propagation, 52(3), 2004, 771-779.
[2] Eberhart R. C. and Shi Y., Comparison between genetic algorithm and particle swarm optimization, Proc. IEEE Int. Conf. on Evolutionary Computation, Anchorage, AK, 1998, 611-616.
[3] Kennedy J. and Eberhart R. C., Particle swarm optimization, Proc. IEEE International Conference on Neural Networks, Piscataway, NJ, 4, 1995, 1942-1948.
[4] Shi Y. H. and Eberhart R. C., Parameter selection in particle swarm optimization, Annual Conference on Evolutionary Computation, 1999, 101-106.
[5] Liu Yu, Zheng Qin and Xingshi He, Supervisor-student model in particle swarm optimization, IEEE Congress on Evolutionary Computation (CEC 2004), 1, 2004, 542-547.
[6] Shi Y. and Eberhart R. C., A modified particle swarm optimizer, Proc. IEEE Congress on Evolutionary Computation (CEC 1998), Piscataway, NJ, 1998, 69-73.
[7] Pasupuleti Srinivas and Roberto Battiti, The gregarious particle swarm optimizer (G-PSO), GECCO 2006, Seattle, Washington, USA, 2006.
[8] Arumugam M. Senthil, Rao M. V. C. and Chandramohan Aarthi, A new and improved version of particle swarm optimization algorithm with global-local best parameters, Journal of Knowledge and Information Systems (KAIS), Springer, 16(3), 2008, 324-350.
[9] Yang Ge and Rubo Zhang, An emotional particle swarm optimization algorithm, Advances in Natural Computation, Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, 3612, 2005, 553-561.
[10] P. S. Revenkar, Smita Kasar and Abhilasha Mishra, Optimization of non linear programming problems using non traditional method: genetic algorithms, International Conference and Workshop on Emerging Trends in Technology, ACM, New York, NY, USA, 2010, 1002-2002.
[11] Chandramouli H., Somashekhar C. Desai, K. S. Jagadeesh and Kashyap D. Dhruve, Elephant swarm optimization for wireless sensor networks - a cross layer mechanism, International Journal of Computer Engineering & Technology (IJCET), 4(2), 2013, 45-60.
[12] R. Arivoli and I. A. Chidambaram, Multi-objective particle swarm optimization based load-frequency control of a two-area power system with SMES interconnected using AC-DC tie-lines, International Journal of Electrical Engineering & Technology (IJEET), 3(1), 2012, 1-20.
[13] A. Sri Rama Chandra Murty and M. Surendra Prasad Babu, Implementation of particle swarm optimization (PSO) algorithm on potato expert system, International Journal of Computer Engineering & Technology (IJCET), 4(4), 2013, 82-90.

