A Comparison of Constraint-Handling Methods for the Application of Particle Swarm Optimization to Constrained Nonlinear Optimization Problems

Genevieve Coath and Saman K. Halgamuge
Mechatronics and Manufacturing Research Group
Department of Mechanical and Manufacturing Engineering
University of Melbourne
[gcoath, sam]@mame.mu.oz.au

Abstract- This paper presents a comparison of two constraint-handling methods used in the application of particle swarm optimization (PSO) to constrained nonlinear optimization problems (CNOPs). A brief review of constraint-handling techniques for evolutionary algorithms (EAs) is given, followed by a direct comparison of two existing methods of enforcing constraints using PSO. The two methods considered are the application of non-stationary multi-stage penalty functions and the preservation of feasible solutions. Five benchmark functions are used for the comparison, and the results are examined to assess the performance of each method in terms of accuracy and rate of convergence. Conclusions are drawn and suggestions for the applicability of each method to real-world CNOPs are given.

1 Introduction

The optimization of constrained functions is an area of particular importance in applications of engineering, mathematics and economics, among others. Constraints can be hard or soft in nature, representing physical limits of systems or preferred operating ranges. Hence, the rigidity with which these constraints are enforced should reflect the nature of the constraints themselves. Stochastic methods of solving constrained nonlinear optimization problems (CNOPs) are becoming popular due to the success of evolutionary algorithms (EAs) in solving unconstrained optimization problems. EAs comprise a family of computational techniques which, in general, simulate the evolutionary process [1]. Within this family, techniques may utilise population-based methods which rely on stochastic variation and selection. When applying EA techniques to CNOPs, constraints can be effectively removed via penalty functions, and the problem can then be tackled as an unconstrained optimization task. Nonlinear optimization represents difficult challenges, as noted in [2], due to its incompatibility with general deterministic solution methods.

Among the many EAs investigated for solving CNOPs, particle swarm optimization (PSO) has been given considerable attention in the literature due to its speed and efficiency as an optimization technique. For real-time CNOPs, which are common in many facets of engineering, genetic algorithms (GAs) may be inefficient and slow, and thus PSO may provide a viable alternative. Recently, PSO has been applied to CNOPs with very encouraging results. Parsopoulos et al. [3] compared the ability of PSO to solve CNOPs with the use of non-stationary multi-stage penalty functions to that of other EAs such as GAs. They concluded that in most cases PSO was able to find better solutions than the EAs considered. Hu and Eberhart [2], [4] implemented a PSO strategy of tracking only feasible solutions, with excellent results when compared to GAs. This paper attempts to provide an assessment of these two constraint-handling methods, as to date there has been no direct comparison of their application using PSO. It should be noted that the literature is generally in agreement that in order for EAs, including PSO, to be successfully applied to real-world problems in this area, they must first prove their ability to solve highly complex CNOPs.

1.1 Constrained Nonlinear Optimization Problems

In general, a constrained nonlinear problem can be defined as follows [5]: minimize or maximize $f(x)$, subject to the linear and nonlinear constraints

$$g_i(x) \ge 0, \quad i = 1, \ldots, m_1, \qquad g_i(x) = 0, \quad i = m_1 + 1, \ldots, m,$$

where $x \in \mathbb{R}^n$, together with the bounds $x_l \le x \le x_u$, and where $f$ and $g_1, \ldots, g_m$ are continuously differentiable functions. Nonlinear optimization is complex and unpredictable, and deterministic approaches may therefore be impossible or inappropriate due to the assumptions they make about the continuity and differentiability of the objective function [6], [3]. Recently, there has been considerable interest in employing stochastic methods to overcome CNOPs, and this has highlighted the need for solid EA strategies for coping with infeasible solutions.

2 Particle Swarm Optimization
Particle swarm optimization was introduced by Kennedy and Eberhart in 1995 [7], [8]. It is an evolutionary computation technique which is stochastic in nature, and models itself on flocking behaviour, such as that of birds. A random initialization process provides a population of possible solutions, and through successive generations the optimum result is found. Through these generations, each "particle", or potential solution, evolves. In the global version of PSO, each particle knows its best position so far (pBest), and the best position so far among the entire group (gBest). The basic equations utilised during this evolutionary process are given below:
$$v_{id} = w \times v_{id} + c_1 \times \mathrm{rand}() \times (p_{id} - x_{id}) + c_2 \times \mathrm{rand}() \times (p_{gd} - x_{id}) \qquad (1)$$

$$x_{id} = x_{id} + v_{id} \qquad (2)$$

In Equation (1), $w$ is the inertia weight, $v_{id}$ is the particle's new velocity, $p_{id}$ is the position at which the particle has achieved its best fitness so far, and $p_{gd}$ is the position at which the best global fitness has been achieved so far. From Equation (2), $x_{id}$ is the particle's next position, based on its previous position and new velocity. The acceleration coefficients $c_1$ and $c_2$ are both set to 1.49445, and the inertia weight $w$ is set to $[0.5 + (\mathrm{rand}/2.0)]$, as suggested in [4]. These numerical parameters are used for all experiments in this paper. Population sizes of 30 are also used (for all but one test case, which uses 50), with the maximum number of generations being set to either 500 or 5000, depending on the complexity of the problem.

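For illustration, the following is a minimal sketch of the global-best PSO loop described by Equations (1) and (2), under the parameter settings given above. The objective function, bounds, and helper names are illustrative assumptions, not code from the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=30, max_iter=500):
    """Global-best PSO implementing Equations (1) and (2).

    f      -- objective function to minimize (illustrative placeholder)
    bounds -- list of (low, high) tuples, one per dimension
    """
    dim = len(bounds)
    # Random initialization of positions; velocities start at zero.
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                      # best position per particle
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # best position of the swarm

    c1 = c2 = 1.49445                                # acceleration coefficients
    for _ in range(max_iter):
        w = 0.5 + random.random() / 2.0              # inertia weight, as in [4]
        for i in range(n_particles):
            for d in range(dim):
                # Equation (1): velocity update.
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                # Equation (2): position update.
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:                   # update personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:                  # update global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Example usage on an unconstrained sphere function (illustrative only).
best_x, best_f = pso_minimize(lambda x: sum(xi**2 for xi in x), [(-5, 5)] * 3)
```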

The application of PSO to various optimization problems has been reported extensively in the literature [7], [9], [10]. PSO also has a local version, in which each particle has knowledge of the best position obtained within its own neighbourhood (lBest) instead of the best position obtained globally. This version of PSO is often used to overcome problems associated with the swarm getting trapped in local optima, which can occur in the global version of PSO when applied to certain types of problems. The references [7] and [8] provide a further explanation of PSO operation. Whilst PSO has the advantages of speed and simplicity when applied to a wide range of problems, it is yet to be proven as a practical technique for solving CNOPs. Hence, the ability of PSO to deal with constrained problems needs thorough investigation.

3 Constraint-Handling Methods for Evolutionary Algorithms

Several methods of overcoming the constraints associated with CNOPs using EAs have been reported. According to [11], these strategies fall into several distinct areas, some of which are listed below:

- Methods based on penalty functions.
- Methods based on the rejection of infeasible solutions.
- Methods based on repair algorithms.
- Methods based on specialized operators.
- Methods based on behavioural memory.

The focus of this paper is on the first two methods. Penalty functions are a traditional means of solving CNOPs, and they have been employed widely in GAs. When using this approach, the constraints are effectively removed, and a penalty is added to the objective function value (in a minimization problem) when a violation occurs. Hence, the optimization problem becomes one of minimizing the objective function and the penalty together. Penalty function methods require fine-tuning of the penalty function parameters, and face the difficulty of maintaining a balance between obtaining feasibility whilst finding optimality. In the case of GAs, it has been noted in [6] that too high a penalty will force the GA to find a feasible solution even if it is not optimal. Conversely, if the penalty is small, the emphasis on feasibility is thereby reduced, and the system may never converge. Penalty functions can be stationary or non-stationary: stationary penalty functions add a fixed penalty when a violation occurs, as opposed to non-stationary penalty functions, which add a penalty proportional to the amount by which the constraint was violated and are also a function of the iteration number.

The rejection of infeasible solutions is distinct from the penalty function approach: infeasible members of a population are simply rejected, and solutions are discarded if they do not satisfy the constraints of the problem. This method is incorporated in evolution strategies (ESs). This approach has its drawbacks, in that a decision must be made as to whether some infeasible solutions may be "better" than some feasible ones [11], or how to distinguish between the best infeasible solutions and the worst feasible ones, as discussed later. Repair algorithms seek to restore feasibility to the solution space; however, it has been stated in [6] that the restoration process itself can represent a task similar in complexity to the optimization problem itself.

3.1 Constraint-Handling with Particle Swarm Optimization

3.1.1 Preservation of Feasible Solutions Method

The preservation of feasible solutions method (FSM) for constraint handling with PSO was adapted from [12] by Hu and Eberhart in [2]. The idea here is to accelerate the iterative process of tracking feasible solutions by forcing the search space to contain only solutions that do not violate any constraints. When implementing this technique in the global version of PSO, only particles which remain in the feasible space are counted for the new pBest and gBest values (or lBest for the local version). When using this approach, according to [2], the initialisation process involves forcing all particles into the feasible space before any evaluation of the objective function has begun. Hence, no search can begin until the solution space is completely initialized. The obvious drawback of this method is that for CNOPs with an extremely small feasible space, the initialization process may be impractically long, or almost impossible, as shown in the sketch below.
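As a minimal sketch of this feasibility-preserving rule, assuming constraints expressed as $g_i(x) \le 0$ and a swarm loop like the one in Section 2, the helper names, retry limit, and the example constraint below are illustrative assumptions, not code from the paper.

```python
import random

def feasible(x, constraints, tol=1e-5):
    """True if every constraint g_i(x) <= 0 holds within the violation tolerance."""
    return all(g(x) <= tol for g in constraints)

def init_feasible_particle(bounds, constraints, max_attempts=100000):
    """FSM initialization: resample until the particle satisfies all constraints.

    For problems with a very small feasible space this loop dominates the run
    time, which is the drawback noted above.
    """
    for _ in range(max_attempts):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        if feasible(x, constraints):
            return x
    raise RuntimeError("could not initialize a feasible particle")

def update_bests_fsm(x, fx, pbest, pbest_val, gbest, gbest_val, constraints):
    """FSM best-update: only feasible particles may become pBest or gBest."""
    if feasible(x, constraints) and fx < pbest_val:
        pbest, pbest_val = x[:], fx
        if fx < gbest_val:
            gbest, gbest_val = x[:], fx
    return pbest, pbest_val, gbest, gbest_val

# Example with a single illustrative constraint x1 + x2 - 10 <= 0.
x0 = init_feasible_particle([(0, 20), (0, 20)], [lambda x: x[0] + x[1] - 10])
```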
3.1.2 Penalty Function Method

A non-stationary, multi-stage penalty function method (PFM) for constraint handling with PSO was implemented by Parsopoulos and Vrahatis in [3], with some excellent results. This method requires a rigorous checking process after evaluation of the objective function, to ascertain the extent of any constraint violations, followed by the assignment and summation of any penalties applied. Non-stationary penalty functions have already shown some potential when implemented with PSO; the particular penalty function used in this paper is defined in Section 4.

4 Experimental Methodology

Five well-known CNOP test problems were used in our experiments, comprising a selection of minimization problems common to those used in [3], [2], [5] and [6], for comparative purposes. Twenty runs of 500 iterations were performed on each problem. For all problems except Test Problem 5, a population of 30 particles was used; for Test Problem 5, a population of 50 particles was used in conjunction with 5000 iterations. For each test problem, the same initialization range was used for both constraint-handling methods. In addition, a violation tolerance was implemented for the experiments in this paper, meaning that a constraint was only considered violated if $g_i(x) > 10^{-5}$. The aim of the experiments was to directly compare the rate of convergence and accuracy of results of the two constraint-handling methods, in order to assess each method's suitability for application to real-world problems.

The penalty function used in [3], originally from [13], was:

$$F(x) = f(x) + h(k) H(x) \qquad (3)$$

where $H(x)$ is a penalty factor, defined as

$$H(x) = \sum_{i=1}^{m} \theta(q_i(x)) \, q_i(x)^{\gamma(q_i(x))} \qquad (4)$$

$$q_i(x) = \max\{0, g_i(x)\}, \quad i = 1, \ldots, m \qquad (5)$$

Here $f(x)$ is the original objective function to be optimized, $h(k)$ is a penalty value which is modified according to the algorithm's current iteration number $k$, $q_i(x)$ is a relative violated function of the constraints, $\theta(q_i(x))$ is an assignment function, and $\gamma(q_i(x))$ is the power of the penalty function [3].

As per [3], the following values were used for the penalty function in all cases except Test Problem 5: if $q_i(x) < 1$ then $\gamma(q_i(x)) = 1$, else $\gamma(q_i(x)) = 2$; if $q_i(x) < 0.001$ then $\theta(q_i(x)) = 10$, else if $q_i(x) \le 0.1$ then $\theta(q_i(x)) = 20$, else if $q_i(x) \le 1$ then $\theta(q_i(x)) = 100$, otherwise $\theta(q_i(x)) = 300$. The penalty value $h(k)$ was set to $h(k) = \sqrt{k}$ for Test Problem 1 and $h(k) = k\sqrt{k}$ for the remaining test problems. For Test Problem 5, a significantly larger value of $\theta(q_i(x))$ was used, due to the inability of the previous values of $\theta(q_i(x))$ to encourage any convergence; this was established only by trial and error. A sketch of this penalty computation is given after the test problem definitions below. The test problems used were as follows:

4.1 Test Problem 1:

$f(x) = (x_1 - 2)^2 + (x_2 - 1)^2$
Subject to: $x_1 = 2x_2 - 1$, and $x_1^2/4 + x_2^2 - 1 \le 0$.
Best known solution: $f^* = 1.393$

4.2 Test Problem 2:

$f(x) = (x_1 - 10)^3 + (x_2 - 20)^3$
Subject to: $100 - (x_1 - 5)^2 - (x_2 - 5)^2 \le 0$, $(x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$, and the bounds $13 \le x_1 \le 100$, $0 \le x_2 \le 100$.
Best known solution: $f^* = -6961.81381$

4.3 Test Problem 3:

$f(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6 x_7 - 10x_6 - 8x_7$
Subject to: $127 - 2x_1^2 - 3x_2^4 - x_3 - 4x_4^2 - 5x_5 \ge 0$, $282 - 7x_1 - 3x_2 - 10x_3^2 - x_4 + x_5 \ge 0$, $196 - 23x_1 - x_2^2 - 6x_6^2 + 8x_7 \ge 0$, $-4x_1^2 - x_2^2 + 3x_1 x_2 - 2x_3^2 - 5x_6 + 11x_7 \ge 0$, and the bounds $-10 \le x_i \le 10$, where $i = 1, \ldots, 7$.
Best known solution: $f^* = 680.630$

4.4 Test Problem 4:

$f(x) = 5.3578547x_3^2 + 0.8356891x_1 x_5 + 37.293239x_1 - 40792.141$
Subject to:
$0 \le 85.334407 + 0.0056858x_2 x_5 + 0.0006262x_1 x_4 - 0.0022053x_3 x_5 \le 92$
$90 \le 80.51249 + 0.0071317x_2 x_5 + 0.0029955x_1 x_2 + 0.0021813x_3^2 \le 110$
$20 \le 9.300961 + 0.0047026x_3 x_5 + 0.0012547x_1 x_3 + 0.0019085x_3 x_4 \le 25$
and the bounds $78 \le x_1 \le 102$, $33 \le x_2 \le 45$, and $27 \le x_j \le 45$, where $j = 3, 4, 5$.
Best known solution: $f^* = -30665.5$

4.5 Test Problem 5:

$f(x) = x_1 + x_2 + x_3$
Subject to:
$1 - 0.0025(x_4 + x_6) \ge 0$
$1 - 0.0025(x_5 + x_7 - x_4) \ge 0$
$1 - 0.01(x_8 - x_5) \ge 0$
$x_1 x_6 - 833.33252x_4 - 100x_1 + 83333.333 \ge 0$
$x_2 x_7 - 1250x_5 - x_2 x_4 + 1250x_4 \ge 0$
$x_3 x_8 - 1250000 - x_3 x_5 + 2500x_5 \ge 0$
and the bounds $100 \le x_1 \le 10000$, $1000 \le x_i \le 10000$, where $i = 2, 3$, and $10 \le x_j \le 1000$, where $j = 4, \ldots, 8$.
Best known solution: $f^* = 7049.33$
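To make Equations (3)-(5) concrete, the following is a minimal sketch of the non-stationary multi-stage penalty of [3] under the staging values listed above, applied to Test Problem 2 as an example. The function and constraint encodings are our own illustrative translations, not code from the paper.

```python
import math

def theta(q):
    """Multi-stage assignment function, using the stage values given above."""
    if q < 0.001:
        return 10.0
    if q <= 0.1:
        return 20.0
    if q <= 1.0:
        return 100.0
    return 300.0

def gamma(q):
    """Power of the penalty: 1 for small violations, 2 otherwise."""
    return 1.0 if q < 1.0 else 2.0

def penalized(f, constraints, x, k):
    """Equations (3)-(5): F(x) = f(x) + h(k) * H(x).

    constraints -- functions g_i with the convention g_i(x) <= 0 when satisfied
    k           -- current iteration number; h(k) = k * sqrt(k) here
                   (the paper uses h(k) = sqrt(k) for Test Problem 1)
    """
    H = 0.0
    for g in constraints:
        q = max(0.0, g(x))                 # Equation (5): relative violation
        H += theta(q) * q ** gamma(q)      # Equation (4)
    return f(x) + k * math.sqrt(k) * H     # Equation (3)

# Test Problem 2 (Section 4.2), with constraints rewritten as g(x) <= 0.
f2 = lambda x: (x[0] - 10) ** 3 + (x[1] - 20) ** 3
g2 = [
    lambda x: 100 - (x[0] - 5) ** 2 - (x[1] - 5) ** 2,
    lambda x: (x[0] - 6) ** 2 + (x[1] - 5) ** 2 - 82.81,
]
print(penalized(f2, g2, [14.095, 0.84296], k=100))  # near the best known solution
```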
5 Results

The two constraint-handling methods, FSM and PFM, were applied to the above five test problems. Tables 1 and 2 show the best known solutions to the test problems, followed by the best solution obtained by the FSM and the PFM respectively over the 20 runs. Tables 3 and 4 show the average optimal solution obtained by each method, followed in brackets by the average maximum attempts (MA) required to initialize a particle into the feasible space for the FSM, and the average sum of violated constraints (SVC) for the PFM.

274458) "No feasihle space was initialized hy FSM for this test problem. Table 2: Comparison of best known solutions and hest optimal solutions ohtained hy PSO using PFM over 20 runs. it can he almost impossihle to randomly generate an initial set of feasible solutions. and the average sum of violated constraints (SVC) for the P F M From the results. hence the FSM was considered impractical for application to this problem. It can he seen from Table 3 that the FSM may he inappropriate for CNOPs which have an extremely small feasible space (such as Test Problem 2. whose relative size of feasible space is < O. As canhe seen in Figure 1. The FSM achieved an optimum result very close to the hest known solution. clearly displayed on the same graph 5. Table 3: Average optimal solutions obtained by PSO using FSM over 20 runs.267) (732) (16) 2422 .091 (MA) 1 (60. .09383) (0. *A population of 50 was used and maximum iterations was set to 5000.3934650 -6961. a rapid convergence occurs during the first 2-3 iterations.75298 -30665. . The PFM achieved a better optimum result and a better average optimum than the FSM.3934649 Table 4: Average optimal solutions obtained by PSO using PFM over 20 runs. [ 2 ] ) .008 8763. 1.Table 1 : Comparison of hest known solutions and best optimal solutions obtained hy PSO using FSM over 20 runs.0066% 121. 1.00062) (0.7 184 (0. In such cases. the graphs represent convergence over the number of iterations for which greatest convergence occurs for each test prohlem. Graphical results of the convergence of each method on each test problcm are shown below.4960 -30648. the average maximum attempts (MA) required to initialize a particle into the feasible space for the FSM. Test Problem I was the only case where the initial values of each respective method allowed the results to he \ . 5. It should be noted that for the sake of clarity.1 Test Problem 1 The rate of convergence and accuracy of both methods applied to this prohlem were very competitive.4 I55 680. with a significantly faster convergence rate.2 Test Problem 2 This problem is a cuhic function whose relative size of feasible space is 0.00000) (0. This problem was particularly apparent in Test Problem 5.OOGG%. as can he seen hy comparing Figures 2 and 3. where it took an inordinate amount of time to initialize less than IOW of the particles into the feasible solution space. ~ ~ ~ . it can he seen that thc accuracy of the techniques at finding near-optimal solutions is extremely competitive. ~ ~ ~ ~ ~ . I Test 1 FSM -6960. however the average optimum results of the two methods over 20 runs were identical.

Figure 2: Convergence of FSM for Test Problem 2 over first 100 iterations.

Figure 3: Convergence of PFM for Test Problem 2 over first 100 iterations.

5.3 Test Problem 3

Both methods showed rapid convergence within the first 10 iterations; however, the PFM showed a significantly faster rate of convergence, with a slightly better optimum result and with its SVC also being lower. It did not perform quite as well as the FSM in terms of its average optimum over 20 runs. The PFM did, however, perform better with this test problem than in [3]. This different result may be attributable to the difference in the PSO parameters used.

Figure 4: Convergence of FSM for Test Problem 3 over first 100 iterations.

Figure 5: Convergence of PFM for Test Problem 3 over first 100 iterations.

5.4 Test Problem 4

This test problem is a quadratic function which has a comparatively large relative size of feasible space (52.1230% [2]), for which both methods showed convergence over a similar number of iterations. The PFM outperformed the FSM in terms of the optimum value found; however, the FSM performed consistently well, achieving the better average optimum.

Figure 6: Convergence of FSM for Test Problem 4 over 500 iterations.

Figure 7: Convergence of PFM for Test Problem 4 over 500 iterations.

5.5 Test Problem 5

For this problem, the FSM failed to initialize the search space with feasible solutions due to the binding constraints, hence the FSM was found to be impractical for this case. When implementing the PFM with this test problem, the penalty values which had worked well for all of the other test problems were changed to have a larger penalty for large violations (chosen by trial and error) in order to ensure convergence. Convergence did not occur with every run, hence the poor average optimum result. This test problem was by far the hardest to implement for both techniques. Difficulties in finding the optimal result for this particular test problem were also noted in [6].

Figure 8: Convergence of PFM for Test Problem 5 over 500 iterations.

6 Applications

PSO has already been applied to many real-world CNOPs, with optimization tasks as varied as power system operation [10], internal combustion engine design [14], biomedical applications [15], and the design of a pressure vessel and a welded beam [4]. The authors have recently applied PSO using the PFM described in this paper to the optimal power flow problem, with excellent preliminary results. This is a real-world CNOP which has both equality and inequality constraints. In this case, the FSM would be inappropriate and impractical due to the requirement of having to preprocess functional constraints, which would require repeated evaluation of the objective function itself during the initialization process.

7 Conclusions and Further Work

The purpose of this contribution was to assess the performance of two existing constraint-handling methods when used with particle swarm optimization (PSO) to solve constrained nonlinear optimization problems (CNOPs). These methods were directly compared using identical swarm parameters through simulations, and the best, worst and average solutions noted. This was done with a view to providing readers with some idea of the accuracy and convergence rate of each method when selecting which one to apply to real-world CNOPs.

The two methods were found to be extremely competitive; however, the rate of convergence of the penalty function method (PFM) was found to be significantly faster than that of the feasible solutions method (FSM) in 2 out of the 4 test problems considered. In the other 2 test problems, the rates of convergence were comparable. The accuracy of the average optimal result found by each method was identical in 1 out of the 4 test cases, and of the remaining 3, the FSM found the better average optimum result in 2 of them. The fifth test problem could not be compared directly, due to the FSM struggling to satisfy all of the constraints during the initialization process; for this problem the PFM's optimal solution is good, but its average optimal solution is comparatively poor, as can be seen in Tables 2 and 4.

Hence, the choice of constraint-handling method is very problem-dependent: some problems may require the rapid convergence characteristic shown by the PFM in some of the test cases, whereas others may have constraints which are suitable to evaluate during the initialization process. Proper fine-tuning of the penalty function parameters may result in better average optimal solutions for the PFM, and implementation of the local version of PSO with test problems which are inclined to get stuck in local optima (such as Test Problem 5) may also improve results.

Acknowledgments

The authors would like to greatly acknowledge the Rowden White Foundation for their generous financial support of this research. The authors would also like to acknowledge the assistance of Asanga Ratnaweera for his helpful suggestions in preparing this paper.

Bibliography

[1] T. Bäck, D. Fogel, and Z. Michalewicz, eds., Evolutionary Computation, Institute of Physics Publishing, 2000.

[2] X. Hu and R. Eberhart, "Solving constrained nonlinear optimization problems with particle swarm optimization," in Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics, 2002.

[3] K. Parsopoulos and M. Vrahatis, "Particle swarm optimization method for constrained optimization problems," in Intelligent Technologies - Theory and Applications: New Trends in Intelligent Technologies, vol. 76 of Frontiers in Artificial Intelligence and Applications, pp. 214-220, IOS Press, 2002.

[4] X. Hu, R. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pp. 53-57, 2003.

[5] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, vol. 187 of Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, 1981.

[6] S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evolutionary Computation, vol. 7, pp. 19-44, 1999.

[7] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.

[8] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39-43, 1995.

[9] P. Angeline, "Using selection to improve particle swarm optimization," in Proceedings of the IEEE World Congress on Computational Intelligence, pp. 84-89, 1998.

[10] Y. Fukuyama and H. Yoshida, "Particle swarm optimization for reactive power and voltage control in electric power systems," in Proceedings of the 2001 IEEE Congress on Evolutionary Computation, vol. 1, pp. 87-93, 2001.

[11] Z. Michalewicz, "A survey of constraint handling techniques in evolutionary computation methods," in Proceedings of the 4th Annual Conference on Evolutionary Programming, pp. 135-155, 1995.

[12] J.-M. Yang, Y.-P. Chen, J.-T. Horng, and C.-Y. Kao, "Applying family competition to evolution strategies for constrained optimization," vol. 1213 of Lecture Notes in Computer Science, Springer-Verlag, 1997.

[13] J. Joines and C. Houck, "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA's," in Proceedings of the First IEEE Conference on Evolutionary Computation, pp. 579-584, June 1994.

[14] A. Ratnaweera, S. Halgamuge, and H. Watson, "Particle swarm optimization with self-adaptive acceleration coefficients," in Proceedings of the 1st International Conference on Fuzzy Systems and Knowledge Discovery, pp. 264-268, 2002.

[15] R. Eberhart and X. Hu, "Human tremor analysis using particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation, 1999.