(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 8, November 2010

Combined Algorithm of Particle Swarm Optimization

Narinder Singh, Department of Mathematics, Punjabi University, Patiala (Punjab)-147201, India. Email: narindersinghgoria@ymail.com
S.B. Singh, Department of Mathematics, Punjabi University, Patiala (Punjab)-147201, India. Email: sbsingh69@yahoo.com
J.C. Bansal, ABV-Indian Institute of Information Technology and Management, Gwalior (M.P.), India. Email: jcbansal@yahoo.com

Abstract: A new optimization algorithm, the Combined Algorithm of Particle Swarm Optimization (CAPSO), is presented in this paper. It is based on a novel philosophy of modifying the velocity update equation, obtained by combining two different PSO algorithms, namely Standard Particle Swarm Optimization (SPSO) and Personal Best Position Particle Swarm Optimization (PBPPSO). Its performance is compared with that of the standard PSO by testing both on a set of 15 scalable and 13 non-scalable test problems. Based on numerical and graphical analyses of the results, it is shown that CAPSO outperforms SPSO in terms of efficiency, reliability, accuracy and stability.

Keywords: Particle Swarm Optimization, CAPSO (Combined Algorithm of Particle Swarm Optimization), global optimization, velocity update equation, Personal Best Position Particle Swarm Optimization.

I. INTRODUCTION

Standard Particle Swarm Optimization (SPSO): Particle swarm optimization (PSO) [1], [2] is a stochastic, population-based search method modeled after the behavior of bird flocks. A PSO algorithm maintains a swarm of individuals (called particles), where each particle represents a candidate solution. Particles follow a very simple behavior: emulate the success of neighboring particles and their own past successes. The position of a particle is therefore influenced by the best particle in its neighborhood, as well as by the best solution found by the particle itself. The particle position x_i is adjusted using

x_i(t+1) = x_i(t) + v_i(t+1)   ...(1)

where the velocity component v_i(t) represents the step size. For the basic PSO,

v_ij(t+1) = w·v_ij(t) + c1·r1j·(y_ij − x_ij) + c2·r2j·(ŷ_j − x_ij)   ...(2)

where w is the inertia weight [11], c1 and c2 are the acceleration coefficients, r1j, r2j ∼ U(0,1), y_ij is the personal best position of particle i, and ŷ_j is the neighborhood best position of particle i. The neighborhood best position ŷ_i of particle i depends on the neighborhood topology used [3], [4]. If a star topology is used, then ŷ_i refers to the best position found by the entire swarm. That is,

ŷ(t) ∈ {y_0(t), y_1(t), ..., y_s(t)} such that f(ŷ(t)) = min{ f(y_0(t)), f(y_1(t)), ..., f(y_s(t)) }

where s is the swarm size. The resulting algorithm is referred to as the global best PSO. For the ring topology, the swarm is divided into overlapping neighborhoods of particles. In this case, ŷ_i is the best position found by the neighborhood of particle i. The resulting algorithm is referred to as the local best PSO. The Von Neumann topology defines neighborhoods by organizing particles in a lattice structure. A number of empirical studies have shown that the Von Neumann topology outperforms other neighborhood topologies [4], [5]. It is important to note that neighborhoods are determined using particle indices, and are not based on any spatial information. A large number of PSO variations have been developed, mainly to improve the accuracy of solutions, diversity, and convergence behavior [6], [7]. This section reviews those variations used in this study, from which concepts have been borrowed to develop the new combined algorithm.
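As an illustrative sketch (not the authors' code), the SPSO updates of equations (1) and (2) can be written in C++, the language the experiments in Section IV are reported in; the function and variable names here are our own:

```cpp
#include <cstdlib>
#include <vector>

// Uniform random number in [0, 1), standing in for r1j, r2j ~ U(0,1).
double unif01() { return std::rand() / (RAND_MAX + 1.0); }

// One SPSO step for a single particle, following equations (1) and (2).
// y holds the personal best position, yhat the neighborhood (global) best.
void spso_step(std::vector<double>& x, std::vector<double>& v,
               const std::vector<double>& y, const std::vector<double>& yhat,
               double w, double c1, double c2) {
    for (std::size_t j = 0; j < x.size(); ++j) {
        double r1 = unif01(), r2 = unif01();
        v[j] = w * v[j] + c1 * r1 * (y[j] - x[j])      // cognitive pull to pbest
                        + c2 * r2 * (yhat[j] - x[j]);  // social pull to gbest
        x[j] += v[j];                                  // position update, equation (1)
    }
}
```

Note that when a particle sits exactly at both its personal best and the neighborhood best with zero velocity, both attraction terms vanish and the particle does not move.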

Van den Bergh and Engelbrecht [8], [9], and Clerc and Kennedy [3] formally proved that each particle converges to a weighted average of its personal best and neighborhood best positions. That is,

lim_{t→∞} x_ij(t) = (c1·y_ij + c2·ŷ_j) / (c1 + c2)   ...(3)

This theoretically derived behavior provides support for the barebones PSO developed by Kennedy [10], where the velocity vector is replaced with a vector of random numbers sampled from a Gaussian distribution with mean defined by equation (3), assuming that c1 = c2, and deviation

σ = |y_ij − ŷ_ij|

The velocity equation changes to

v_ij(t+1) ∼ N((y_ij + ŷ_ij)/2, σ)

and the position update changes to

x_i(t+1) = v_i(t+1)

Kennedy [10] also proposed an alternative version of the barebones PSO, where

v_ij(t+1) = y_ij, if U(0,1) < 0.5;  otherwise v_ij(t+1) ∼ N((y_ij + ŷ_ij)/2, σ)   ...(4)

Based on equation (4), there is a 50% chance that the j-th dimension of the particle changes to the corresponding personal best position. This version of the barebones PSO is biased towards exploiting personal best positions. Silva et al. (2002) presented a predator-prey model to maintain population diversity.

[Figure I: Comparative movement of a particle in SPSO and CAPSO, showing the attraction of the current position towards the pbest and gbest positions.]

II. THE PROPOSED COMBINED ALGORITHM OF PSO

The motivation behind CAPSO is to combine the velocity update equations of two different PSO algorithms, namely SPSO (Standard Particle Swarm Optimization) and PBPPSO (Personal Best Position Particle Swarm Optimization). Thus, we introduce a new velocity update equation as follows:

v_ij(t+1) = w·v_ij(t) + c1·r1j·(y_ij − x_ij) + c2·r2j·(ŷ_j − x_ij) + w·v_ij(t) + c1·r1j·(y_ij − x_ij) + c2·r2j·(−x_ij)

or, collecting terms,

v_ij(t+1) = 2w·v_ij(t) + 2c1·r1j·(y_ij − x_ij) + c2·r2j·(ŷ_j − 2x_ij)   ...(5)

In this new velocity update equation, the first term represents the current velocity of the particle and can be thought of as a momentum term. The second term, 2c1·r1j·(y_ij − x_ij), is responsible for attracting the particle's current position towards its own best position (pbest). The third term, c2·r2j·(ŷ_j − 2x_ij), is responsible for attracting the particle's current position towards the neighborhood best.

The pseudo code of CAPSO is shown below:

ALGORITHM CAPSO
For t = 1 to the maximum number of iterations
  For i = 1 to the swarm size
    For j = 1 to the problem dimensionality
      Apply the velocity update equation (5);
      Update the position using equation (1);
    End-for j
    Compute the fitness of the updated position;
    If needed, update the historical information for the personal best and global best positions;
  End-for i
  Terminate if the global best position meets the problem requirements;
End-for t
END ALGORITHM
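The inner loop of the pseudo code, the CAPSO velocity update of equation (5) followed by the position update of equation (1), can be sketched as follows (an illustration under our own naming, not the authors' implementation):

```cpp
#include <cstdlib>
#include <vector>

// Uniform random number in [0, 1), standing in for r1j, r2j ~ U(0,1).
double unif01() { return std::rand() / (RAND_MAX + 1.0); }

// One CAPSO step for a single particle, following equation (5):
//   v = 2*w*v + 2*c1*r1*(pbest - x) + c2*r2*(gbest - 2*x),
// then the position update x = x + v of equation (1).
void capso_step(std::vector<double>& x, std::vector<double>& v,
                const std::vector<double>& y, const std::vector<double>& yhat,
                double w, double c1, double c2) {
    for (std::size_t j = 0; j < x.size(); ++j) {
        double r1 = unif01(), r2 = unif01();
        v[j] = 2.0 * w * v[j]
             + 2.0 * c1 * r1 * (y[j] - x[j])        // doubled pbest attraction
             + c2 * r2 * (yhat[j] - 2.0 * x[j]);    // gbest term minus an extra x
        x[j] += v[j];
    }
}
```

Compared with the SPSO step, the momentum and pbest coefficients are doubled and the social term carries an extra −x_ij, exactly as in equation (5).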


III. THE TEST BED

Often a proposed algorithm is evaluated on only a few benchmark problems. In this paper, however, we consider a test bed of twenty-eight benchmark problems with varying difficulty levels and problem sizes. The relative performance of SPSO and CAPSO is evaluated on two kinds of problem sets. Problem Set I consists of 15 scalable problems, i.e., problems in which the dimension can be increased or decreased at will; in general, the complexity of a problem increases with its size. Problem Set II consists of problems whose size is fixed but which have many local as well as global optima. Problem Set I is shown in Table 1 and Problem Set II in Table 2.

Table 1: Details of Problem Set I

1. Ackley: Min f(x) = −20 exp(−0.02 √(n⁻¹ Σ_{i=1}^{n} x_i²)) − exp(n⁻¹ Σ_{i=1}^{n} cos(2π x_i)) + 20 + e;  search space −30 ≤ x_i ≤ 30;  optimum 0
2. Cosine Mixture: Min f(x) = −0.1 Σ_{i=1}^{n} cos(5π x_i) + Σ_{i=1}^{n} x_i²;  search space −1 ≤ x_i ≤ 1;  optimum −0.1·n
3. Exponential: Min f(x) = −exp(−0.5 Σ_{i=1}^{n} x_i²);  search space −1 ≤ x_i ≤ 1;  optimum −1
4. Griewank: Min f(x) = 1 + (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i);  search space −600 ≤ x_i ≤ 600;  optimum 0
5. Rastrigin: Min f(x) = 10n + Σ_{i=1}^{n} [x_i² − 10 cos(2π x_i)];  search space −5.12 ≤ x_i ≤ 5.12;  optimum 0
6. Function '6' (Rosenbrock): Min f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²];  search space −30 ≤ x_i ≤ 30;  optimum 0
7. Zakharov's: Min f(x) = Σ_{i=1}^{n} x_i² + (Σ_{i=1}^{n} (i/2)·x_i)² + (Σ_{i=1}^{n} (i/2)·x_i)⁴;  search space −5.12 ≤ x_i ≤ 5.12;  optimum 0
8. Sphere: Min f(x) = Σ_{i=1}^{n} x_i²;  search space −5.12 ≤ x_i ≤ 5.12;  optimum 0
9. Axis parallel hyper-ellipsoid: Min f(x) = Σ_{i=1}^{n} i·x_i²;  search space −5.12 ≤ x_i ≤ 5.12;  optimum 0
10. Schwefel '3': Min f(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|;  search space −10 ≤ x_i ≤ 10;  optimum 0
11. Dejong: Min f(x) = Σ_{i=1}^{n} (x_i⁴ + rand(0,1));  search space −10 ≤ x_i ≤ 10;  optimum 0
12. Schwefel '4': Min f(x) = max{ |x_i|, 1 ≤ i ≤ n };  search space −100 ≤ x_i ≤ 100;  optimum 0
13. Cigar: Min f(x) = x_1² + 100000 Σ_{i=2}^{n} x_i²;  search space −10 ≤ x_i ≤ 10;  optimum 0
14. Brown '3': Min f(x) = Σ_{i=1}^{n−1} [(x_i²)^(x_{i+1}² + 1) + (x_{i+1}²)^(x_i² + 1)];  search space −1 ≤ x_i ≤ 4;  optimum 0
15. Function '15': Min f(x) = Σ_{i=1}^{n} [0.2 x_i² + 0.1 x_i² sin 2x_i];  search space −10 ≤ x_i ≤ 10;  optimum 0
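To make the test bed concrete, two representative scalable benchmarks from Table 1 (Sphere and Rastrigin) can be sketched in C++ as follows; the function names are our own:

```cpp
#include <cmath>
#include <vector>

// Sphere (problem 8 of Table 1): f(x) = sum of x_i^2, minimum 0 at the origin.
double sphere(const std::vector<double>& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}

// Rastrigin (problem 5 of Table 1): f(x) = 10n + sum [x_i^2 - 10 cos(2*pi*x_i)],
// minimum 0 at the origin, with a large number of local minima elsewhere.
double rastrigin(const std::vector<double>& x) {
    const double PI = std::acos(-1.0);
    double s = 10.0 * x.size();
    for (double xi : x) s += xi * xi - 10.0 * std::cos(2.0 * PI * xi);
    return s;
}
```

Sphere is unimodal and easy for any descent method, while Rastrigin's cosine modulation produces the many local optima that make it a standard test of an algorithm's ability to escape them.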

Table 2: Details of Problem Set II

1. Becker and Lago: Min f(x) = (|x_1| − 5)² + (|x_2| − 5)²;  search space −10 ≤ x_i ≤ 10;  optimum 0
2. Bohachevsky '1': Min f(x) = x_1² + 2x_2² − 0.3 cos(3π x_1) − 0.4 cos(4π x_2) + 0.7;  search space −50 ≤ x_1, x_2 ≤ 50;  optimum 0


3. Bohachevsky '2': Min f(x) = x_1² + 2x_2² − 0.3 cos(3π x_1) cos(4π x_2) + 0.3;  search space −50 ≤ x_1, x_2 ≤ 50;  optimum 0
4. Branin: Min f(x) = a(x_2 − b x_1² + c x_1 − d)² + g(1 − h) cos(x_1) + g, with a = 1, b = 5.1/(4π²), c = 5/π, d = 6, g = 10, h = 1/(8π);  search space −5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15;  optimum 0.398
5. Eggcrate: Min f(x) = x_1² + x_2² + 25(sin² x_1 + sin² x_2);  search space −2π ≤ x_i ≤ 2π;  optimum 0
6. Miele and Cantrell: Min f(x) = (exp(x_1) − x_2)⁴ + 100(x_2 − x_3)⁶ + (tan(x_3 − x_4))⁴ + x_1⁸;  search space −1 ≤ x_i ≤ 1;  optimum 0
7. Modified Rosenbrock: Min f(x) = 100(x_2 − x_1²)² + [6.4(x_2 − 0.5)² − x_1 − 0.6]²;  search space −5 ≤ x_1, x_2 ≤ 5;  optimum 0
8. Easom: Min f(x) = −cos(x_1) cos(x_2) exp(−(x_1 − π)² − (x_2 − π)²);  search space −10 ≤ x_i ≤ 10;  optimum −1
9. Periodic: Min f(x) = 1 + sin² x_1 + sin² x_2 − 0.1 exp(−x_1² − x_2²);  search space −10 ≤ x_i ≤ 10;  optimum 0.9
10. Powell's: Min f(x) = (x_1 + 10x_2)² + 5(x_3 − x_4)² + (x_2 − 2x_3)⁴ + 10(x_1 − x_4)⁴;  search space −10 ≤ x_i ≤ 10;  optimum 0
11. Camel back-3: Min f(x) = 2x_1² − 1.05x_1⁴ + (1/6)x_1⁶ + x_1 x_2 + x_2²;  search space −5 ≤ x_1, x_2 ≤ 5;  optimum 0
12. Camel back-6: Min f(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1 x_2 − 4x_2² + 4x_2⁴;  search space −5 ≤ x_1, x_2 ≤ 5;  optimum −1.0316
13. Aluffi-Pentini's: Min f(x) = 0.25x_1⁴ − 0.5x_1² + 0.1x_1 + 0.5x_2²;  search space −10 ≤ x_i ≤ 10;  optimum −0.3523
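As an example of the fixed-size problems, the Easom function (problem 8 of Table 2), whose minimum of −1 sits in a narrow valley at x_1 = x_2 = π, can be written as a sketch (the signature is our own):

```cpp
#include <cmath>

// Easom: an almost entirely flat landscape with a single narrow valley
// around (pi, pi), where the function attains its minimum value of -1.
double easom(double x1, double x2) {
    const double PI = std::acos(-1.0);
    return -std::cos(x1) * std::cos(x2)
           * std::exp(-(x1 - PI) * (x1 - PI) - (x2 - PI) * (x2 - PI));
}
```

Away from (π, π) the exponential factor drives the function towards zero, which is what makes the optimum hard to locate by blind sampling.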

IV. ANALYSES OF RESULTS

SPSO and CAPSO are coded in C++ and implemented on a Pentium-IV 2.4 GHz machine with 512 MB RAM under the Windows XP platform. Fifty independent runs with different seeds for the generation of random numbers are taken; however, the same seed is used for generating the initial swarm of SPSO and CAPSO in the i-th run, i = 1, 2, ..., 50. A run is said to be successful if the best objective function value found in that run lies within 1% accuracy of the best known objective function value of the problem. The maximum number of function evaluations is fixed at 30,000. The swarm size is fixed at 20 and the problem dimension at 30. The inertia weight is 0.7 and the acceleration coefficients of SPSO and CAPSO are set to c1 = c2 = 1.5.

A number of criteria are used to evaluate the performance of SPSO and CAPSO. The percentage of success is used to evaluate reliability. The average number of function evaluations of the successful runs and the average computational time of the successful runs are used to evaluate cost. For Problem Set I, performance is measured by the minimum, mean, rate of success and standard deviation of the objective function values over the fifty runs; this is shown in Table 3. The corresponding information for Problem Set II is shown in Table 4. From Table 3 it can be seen that CAPSO gives a better quality of solutions than SPSO; thus, for the scalable problems CAPSO outperforms SPSO with respect to efficiency, reliability, cost and robustness. From Table 4 the same conclusion holds for the non-scalable problems. It is also observed in Table 3 that SPSO could not solve two problems with 100% success, whereas CAPSO solved all the problems with 100% success.
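The 1% success criterion described above can be sketched as follows; the fallback to an absolute tolerance when the best known optimum is zero is our assumption, since the paper does not specify how relative accuracy is measured in that case:

```cpp
#include <cmath>

// A run counts as successful when the best value found lies within 1%
// accuracy of the best known objective value. For a zero optimum we fall
// back to an absolute tolerance (an assumption; the paper does not
// specify how this case is handled).
bool successful_run(double best_found, double best_known,
                    double abs_tol = 0.01) {
    if (best_known == 0.0)
        return std::fabs(best_found) <= abs_tol;
    return std::fabs(best_found - best_known) <= 0.01 * std::fabs(best_known);
}
```

The rate of success over the fifty runs is then simply the fraction of runs for which this predicate holds.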


Table 3: Comparative objective function values obtained in 50 runs by SPSO and CAPSO for Problem Set I

No | Minimum (SPSO / CAPSO) | Mean (SPSO / CAPSO) | Std. dev. (SPSO / CAPSO) | Success (SPSO / CAPSO)
1 | 0.667619 / 0.271435 | 16485.6000 / 2331.0000 | 0.142795 / 0.139467 | 98.00% / 100%
2 | 0.644392 / 0.279915 | 1708.20000 / 193.20000 | 0.053545 / 0.129501 | 100% / 100%
3 | 0.000000 / 0.000000 | 60.000000 / 60.000000 | 0.000207 / 0.000080 | 100% / 100%
4 | 0.777974 / 0.422415 | 14364.6000 / 5998.8000 | 0.026005 / 0.117862 | 100% / 100%
5 | 27.127816 / 0.266949 | 30000.0000 / 19504.800 | 29.809592 / 41.94101 | 0.00% / 100%
6 | 0.000061 / 0.000001 | 166.200000 / 141.00000 | 0.200616 / 0.285178 | 100% / 100%
7 | 0.000274 / 0.000003 | 72.000000 / 76.800000 | 0.229660 / 0.233736 | 100% / 100%
8 | 0.685057 / 0.295695 | 6096.00000 / 864.00000 | 0.054336 / 0.155304 | 100% / 100%
9 | 0.000002 / 0.000001 | 60.600000 / 63.600000 | 0.179978 / 0.219324 | 100% / 100%
10 | 0.001109 / 0.001107 | 60.600000 / 60.600000 | 0.161759 / 0.180857 | 100% / 100%
11 | 0.601870 / 0.098892 | 11341.8000 / 5126.4000 | 0.067786 / 0.227513 | 100% / 100%
12 | 0.022248 / 0.006012 | 78.000000 / 87.000000 | 0.243564 / 0.253503 | 100% / 100%
13 | 0.001848 / 0.001248 | 1767.00000 / 1767.0000 | 0.253535 / 0.273535 | 100% / 100%
14 | 0.000126 / 0.000108 | 60.000000 / 60.000000 | 0.048579 / 0.053570 | 100% / 100%
15 | 0.000009 / 0.000001 | 60.000000 / 60.000000 | 0.005729 / 0.004014 | 100% / 100%

Table 4: Comparative objective function values obtained in 50 runs by SPSO and CAPSO for Problem Set II

No | Minimum (SPSO / CAPSO) | Mean (SPSO / CAPSO) | Std. dev. (SPSO / CAPSO) | Success (SPSO / CAPSO)
1 | 0.500000 / 0.500000 | 60.000000 / 60.000000 | 0.042453 / 0.042452 | 100% / 100%
2 | 0.017193 / 0.002786 | 64.200000 / 62.400000 | 0.258362 / 0.248258 | 100% / 100%
3 | 0.001029 / 0.001024 | 66.600000 / 67.800000 | 0.224219 / 0.257928 | 100% / 100%
4 | 0.398600 / 0.395682 | 128.400000 / 175.200000 | 0.137710 / 0.148115 | 100% / 100%
5 | 0.018613 / 0.002431 | 72.000000 / 72.600000 | 0.240972 / 0.221812 | 100% / 100%
6 | 0.498600 / 0.398600 | 128.400000 / 120.400000 | 0.167710 / 0.147710 | 100% / 100%
7 | 0.027193 / 0.017786 | 64.200000 / 62.400000 | 0.358362 / 0.288258 | 100% / 100%
8 | 0.015341 / 0.012461 | 82.200000 / 95.400000 | 0.281294 / 0.256433 | 100% / 100%
9 | 0.480507 / 0.480489 | 60.000000 / 60.000000 | 0.026709 / 0.021144 | 100% / 100%
10 | 0.067997 / 0.051277 | 840.600000 / 517.200000 | 0.215576 / 0.253873 | 100% / 100%
11 | 0.003378 / 0.002978 | 60.600000 / 64.600000 | 0.207517 / 0.246517 | 100% / 100%
12 | 0.005549 / 0.003824 | 63.600000 / 66.600000 | 0.270722 / 0.238520 | 100% / 100%
13 | 0.002655 / 0.002017 | 65.400000 / 60.000000 | 0.229666 / 0.181436 | 100% / 100%

[Figure A: Comparing SPSO and CAPSO on the 15 scalable problems of Set I, plotting the minimum function values.]

[Figure B: Comparing SPSO and CAPSO on the 13 non-scalable problems of Set II, plotting the minimum function values.]


V. CONCLUSIONS

This paper presented a new population-based algorithm, CAPSO (Combined Algorithm of Particle Swarm Optimization). It is based on combining two different particle swarm optimization algorithms, namely Standard Particle Swarm Optimization and Personal Best Position Particle Swarm Optimization. It was tested on 15 scalable and 13 non-scalable problems. It is shown that CAPSO outperforms SPSO in terms of efficiency, accuracy, reliability and robustness, particularly for large problems. The effect of changing the parameters of CAPSO is not explored in this paper; in a future study, parameter fine-tuning may be carried out for better performance. The application of CAPSO to real-world problems would also be an interesting direction for future research.

REFERENCES

[1] R.C. Eberhart and J. Kennedy, "A New Optimizer using Particle Swarm Theory", Proceedings of the Sixth International Symposium on Micromachine and Human Science, pp. 39–43, 1995.
[2] J. Kennedy and R.C. Eberhart, "Particle Swarm Optimization", Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1942–1948, IEEE Press, 1995.
[3] J. Kennedy, "Small Worlds and Mega-Minds: Effects of Neighborhood Topology on Particle Swarm Performance", Proceedings of the IEEE Congress on Evolutionary Computation, Vol. 3, pp. 1931–1938, July 1999.
[4] J. Kennedy and R. Mendes, "Population Structure and Particle Performance", Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, IEEE Press, 2002.
[5] E.S. Peer, F. van den Bergh, and A.P. Engelbrecht, "Using Neighborhoods with the Guaranteed Convergence PSO", Proceedings of the IEEE Swarm Intelligence Symposium, pp. 235–242, IEEE Press, 2003.
[6] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, Wiley & Sons, 2005.
[7] J. Kennedy, R.C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, 2001.
[8] F. van den Bergh, An Analysis of Particle Swarm Optimizers, PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.
[9] F. van den Bergh and A.P. Engelbrecht, "A Study of Particle Swarm Optimization Particle Trajectories", Information Sciences, Vol. 176, No. 8, pp. 937–971, 2006.
[10] J. Kennedy, "Bare Bones Particle Swarms", Proceedings of the IEEE Swarm Intelligence Symposium, pp. 80–87, April 2003.
[11] Y. Shi and R.C. Eberhart, "A Modified Particle Swarm Optimizer", Proceedings of the IEEE Congress on Evolutionary Computation, pp. 69–73, May 1998.
[12] P.J. Angeline, "Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and Performance Differences", Lecture Notes in Computer Science, Vol. 1447, pp. 601–610, Springer, Berlin, 1998.
[13] P.J. Angeline, "Using Selection to Improve Particle Swarm Optimization", Proceedings of the IEEE Conference on Evolutionary Computation, pp. 84–89, 1998.
[14] A. Banks, J. Vincent, and C. Anyakoha, "A Review of Particle Swarm Optimization, Part I: Background and Development", Natural Computing, Vol. 6, No. 4, pp. 467–484, 2007.
[15] A. Banks, J. Vincent, and C. Anyakoha, "A Review of Particle Swarm Optimization, Part II: Hybridisation, Combinatorial, Multicriteria and Constrained Optimization, and Indicative Applications", Natural Computing, Vol. 7, No. 1, pp. 109–124, 2008.
[16] S. Baskar and P.N. Suganthan, "A Novel Concurrent Particle Swarm Optimization", Proceedings of the Congress on Evolutionary Computation, pp. 792–796, 2004.
[17] K. Deep and M. Thakur, "A New Crossover Operator for Real Coded Genetic Algorithms", Applied Mathematics and Computation, Vol. 188, No. 1, pp. 895–911, 2007.
[18] R.C. Eberhart and Y. Shi, "Comparing Inertia Weights and Constriction Factors in Particle Swarm Optimization", Proceedings of the Congress on Evolutionary Computation, San Diego, CA, pp. 84–88, 2000.
[19] S.C. Esquivel and C.A. Coello Coello, "On the Use of Particle Swarm Optimization with Multimodal Functions", Proceedings of the Congress on Evolutionary Computation, pp. 1130–1136, 2003.
[20] S. He, Q.H. Wu, J.Y. Wen, J.R. Saunders, and R.C. Paton, "A Particle Swarm Optimizer with Passive Congregation", Biosystems, Vol. 78, pp. 135–147, 2004.
[21] T. Hendtlass, "Preserving Diversity in Particle Swarm Optimization", Lecture Notes in Computer Science, Vol. 2718, pp. 31–40, 2003.
[22] N. Higashi and H. Iba, "Particle Swarm Optimization with Gaussian Mutation", Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pp. 72–79, 2003.
[23] X. Hu, Y. Shi, and R.C. Eberhart, "Recent Advances in Particle Swarm", Proceedings of the Congress on Evolutionary Computation, Vol. 1, pp. 90–97, 2004.
[24] S. Janson and M. Middendorf, "A Hierarchical Particle Swarm Optimizer and its Adaptive Variant", IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol. 38, pp. 1272–1282, 2005.
[25] T. Krink and M. Lovbjerg, "The LifeCycle Model: Combining Particle Swarm Optimization, Genetic Algorithms and Hill Climbing", Proceedings of Parallel Problem Solving from Nature, Vol. 7, pp. 621–630, 2002.
[26] T. Krink, J.S. Vesterstrom, and J. Riget, "Particle Swarm Optimization with Spatial Particle Extension", Proceedings of the Congress on Evolutionary Computation, pp. 1474–1479, 2002.
[27] J.J. Liang, A.K. Qin, P.N. Suganthan, and S. Baskar, "Particle Swarm Optimization Algorithms with Novel Learning Strategies", Proceedings of the IEEE Conference on Systems, Man and Cybernetics, pp. 3659–3664, 2004.
[28] H. Liu, B. Li, X. Wang, Y. Ji, and Y. Tang, "Survival Density Particle Swarm Optimization for Neural Network Training", ISNN (1), LNCS 3173, pp. 332–337, Springer-Verlag, 2004.
[29] M. Lovbjerg and T. Krink, "Extending Particle Swarm Optimizers with Self-Organized Criticality", Proceedings of the Congress on Evolutionary Computation, pp. 1588–1593, 2002.
[30] M. Lovbjerg, T.K. Rasmussen, and T. Krink, "Hybrid Particle Swarm Optimizer with Breeding and Subpopulations", Proceedings of the Third Genetic and Evolutionary Computation Conference, pp. 469–476, 2001.
[31] A. Mohais, C. Ward, and C. Posthoff, "Randomized Directed Neighborhoods with Edge Migration in Particle Swarm Optimization", Proceedings of the IEEE Conference on Evolutionary Computation, pp. 548–555, 2004.
[32] S. Pasupuleti and R. Battiti, "The Gregarious Particle Swarm Optimizer (GPSO)", Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO '06), Seattle, WA, pp. 67–74, ACM, New York, 2006.
[33] T. Peram, K. Veeramachaneni, and C.K. Mohan, "Fitness-Distance-Ratio Based Particle Swarm Optimization", Proceedings of the IEEE Swarm Intelligence Symposium, pp. 174–181, 2003.
[34] R. Poli, W.B. Langdon, and O. Holland, "Extending Particle Swarm Optimization via Genetic Programming", Proceedings of the Eighth European Conference on Genetic Programming, pp. 291–300, 2005.
[35] J. Riget and J.S. Vesterstrom, "A Diversity-Guided Particle Swarm Optimizer - the ARPSO", Technical Report 2002-02, EVALife, Department of Computer Science, University of Aarhus, 2002.
[36] Y. Shi and R.C. Eberhart, "A Modified Particle Swarm Optimizer", Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69–73, 1998.
[37] Y. Shi and R.C. Eberhart, "Parameter Selection in Particle Swarm Optimization", 7th Annual Conference on Evolutionary Programming, San Diego, USA, 1998.
[38] Y. Fukuyama, "Practical Equipment Models for Fast Distribution Power Flow Considering Interconnection of Distributed Generators", Proceedings of the IEEE PES Summer Meeting, 2001.
[39] F.F. Wu and A.F. Neyer, "Asynchronous Distributed State Estimation for Power Distribution Systems", Proceedings of the 10th PSCC, August 1990.
