
2018 International Conference on Frontiers of Information Technology (FIT)

Fitness-based Acceleration Coefficients to Enhance the Convergence Speed of Novel Binary Particle Swarm Optimization

Yasir Mehmood
Department of CS&IT, Mirpur University of Science and Technology, Mirpur, AJ&K, Pakistan
yasir.mehmood@nu.edu.pk

Marium Sadiq
Department of CS&IT, Mirpur University of Science and Technology, Mirpur, AJ&K, Pakistan
mariumsadiq91@gmail.com

Faryal Amin
Department of CS&IT, Mirpur University of Science and Technology, Mirpur, AJ&K, Pakistan
fayalamin27@gmail.com

Waseem Shahzad
Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan
waseem.shahzad@nu.edu.pk

Abstract—Acceleration coefficients are the key parameters of the particle swarm optimization (PSO) algorithm, used to control the movement of particles by modifying their cognitive and social components. Several variants have been proposed that modify the acceleration coefficients to improve the convergence speed of PSO in continuous search spaces. In comparison, little attention has been paid to improving the convergence speed of binary particle swarm optimization (BPSO). Moreover, BPSO treats all particles equally, ignoring the dispersion of particles in the search space. To address this issue, we propose a fitness-based acceleration coefficients novel BPSO, called FAC-NBPSO. In the proposed algorithm, the fitness of each particle is used to modify the cognitive and social components of that particle. The performance of the proposed algorithm is tested on four benchmark test functions. The experimental results show that the proposed algorithm performs better than the compared algorithms, with improved convergence speed.

Keywords—PSO; BPSO; convergence speed; acceleration coefficients.

I. INTRODUCTION

In recent years, an increasing number of complex optimization problems have emerged in various fields of science and technology. To solve optimization problems, researchers have proposed a number of evolutionary algorithms. Kennedy and Eberhart [1] introduced continuous particle swarm optimization (CPSO), inspired by the social behavior of bird flocks and fish schools. Each individual in the swarm represents a particle, which holds a potential solution to the problem. CPSO executes by initializing the particles randomly in the search space. All the particles in the swarm communicate with each other and adjust their positions. On every iteration, the personal best position of a particle is determined by its own best position so far, while the global best position is determined by the best position found in the swarm. Many CPSO variants have been proposed to solve continuous optimization problems; however, CPSO is ineffective for solving binary optimization problems.

To solve binary optimization problems, Kennedy and Eberhart proposed BPSO [2] in 1997. BPSO represents each particle with a binary string of 0s and 1s, and each particle changes its position by selecting either 0 or 1. BPSO has also been successfully utilized to solve many binary problems.

Acceleration coefficients are key parameters of CPSO that significantly affect its performance [3]. A number of modifications of the acceleration coefficients have been proposed to enhance the performance of CPSO. Wu and Zhou [4] proposed a novel self-adaptive strategy for the individual inertia weight and the social acceleration coefficient. An adaptive PSO was proposed by Tang and Zhang [3]; it introduced exponential time-varying acceleration coefficients, reducing the cognitive component and increasing the social component over time. In [5], the acceleration coefficients change dynamically over iterations. Ren [6] proposed a fitness-feedback-based PSO that adjusts the inertia weight and acceleration coefficients dynamically in a non-deterministic way. Ma et al. [7] proposed a modified PSO with dynamic acceleration coefficients. A new self-adaptive PSO that defines the acceleration coefficients in terms of fitness was proposed by Dong et al. [8].

In this paper, we propose a fitness-based acceleration coefficients novel BPSO (FAC-NBPSO) algorithm to improve the performance of NBPSO [9]. The proposed algorithm modifies the cognitive and social components of the velocity update equation on the basis of the fitness

978-1-5386-9355-1/18/$31.00 ©2018 IEEE 355


DOI 10.1109/FIT.2018.00069
of each particle, as proposed in [8], to enhance the convergence speed of NBPSO.

The structure of the remaining paper is as follows. Section II presents binary PSO. Section III presents a brief description of novel BPSO. Section IV introduces the proposed FAC-NBPSO. Section V presents the experimental setup. The experimental results are presented in Section VI. Additional experiments are carried out in Section VII and, finally, the paper is concluded in Section VIII.

II. BINARY PSO

In binary PSO (BPSO) [2], real numbers are represented as bit strings in the binary search space. The personal best position (pbest) and the global best position (gbest) in BPSO are calculated in the same way as in CPSO. The difference between CPSO and BPSO is that a particle's velocity is interpreted as the probability of each bit taking the value 0 or 1, rather than as a real-valued step. The velocity is calculated using equation (1):

v_id(t+1) = w·v_id(t) + c1·Rd1·(pbest_id(t) − x_id(t)) + c2·Rd2·(gbest_id(t) − x_id(t)).   (1)

Here Rd1 and Rd2 are random numbers between 0 and 1, w is the inertia weight, and c1 and c2 are constants known as acceleration coefficients.

BPSO uses a sigmoid function (Sg) to map the velocity to a probability in the range [0, 1]:

Sg(v_ij(t)) = 1 / (1 + e^(−v_ij(t))).   (2)

The particle's new position, based on equation (2), is calculated as:

x_ij(t+1) = 1 if Rd_ij < Sg(v_ij(t+1)), and 0 otherwise,   (3)

where Rd_ij is a random number chosen from the range [0, 1].

In recent years, a number of BPSO variants have been proposed to achieve better performance and to overcome the convergence problems associated with BPSO. Nezamabadi-pour et al. [10] proposed a new BPSO by defining probability functions to change velocity and position. Cervantes et al. [11] applied BPSO to classification problems. A novel BPSO has been proposed in [9] that reinterprets velocity and solves the problem of selecting a proper value of the inertia weight. A modified BPSO has been proposed by Lee et al. [12], adopting the genotype-phenotype concept and the mutation operator of a genetic algorithm to optimize binary problems. Jeong et al. [13] proposed a quantum-inspired BPSO to address unit commitment problems by applying quantum computing concepts. An experiment was performed by Singh et al. [14] to address discrete optimization problems with a hybrid of BPSO and genetic crossover. Zhang et al. [15] developed a BPSO combined with evolutionary algorithms for feature selection using an ENN classifier. In [16], Zabidi et al. presented a comparison between BPSO and a binary artificial bee colony (BABC) for feature selection. To cope with the deficiencies of BPSO, Wei et al. [17] presented a BPSO-SVM algorithm with an enhanced mutation mechanism.

III. NOVEL BINARY PSO (NBPSO)

NBPSO was proposed by Khanesar et al. [9]; it interprets velocity as the rate of change of a particle's bits and solves the problem of selecting a proper value of the inertia weight in existing versions of BPSO. In the swarm, pbest and gbest are calculated in the same way as in BPSO. In NBPSO, the velocity of each particle is interpreted as the rate of change of its bits.

The velocity of each particle is changed by introducing two vectors, V0 and V1. V0 is defined as the chance of a particle's bit changing to 0, while V1 is defined as the chance of a particle's bit changing to 1. The velocity of each particle is thus updated as:

v_ij^1 = w·v_ij^1 + d1_ij^1 + d2_ij^1,   (4)

v_ij^0 = w·v_ij^0 + d1_ij^0 + d2_ij^0,   (5)

where w is the inertia weight and d1, d2 are temporary values determined by the bits of pbest and gbest. NBPSO then selects the velocity of change for each bit as:

vc_ij = v_ij^1 if x_ij = 0, and vc_ij = v_ij^0 if x_ij = 1.   (6)

NBPSO calculates the new position of each particle by generating a random variable Rd_ij in the range [0, 1]:

x_ij(t+1) = x̄_ij(t) if Rd_ij < Sg(vc_ij(t+1)), and x_ij(t) otherwise,   (7)

where x̄_ij(t) denotes the complement of the bit x_ij(t).

IV. FITNESS-BASED ACCELERATION COEFFICIENTS NBPSO (FAC-NBPSO) ALGORITHM

The problem with the NBPSO algorithm [9] is that it uses fixed acceleration coefficients for all particles in the swarm. This leads the swarm to converge slowly. To overcome this problem of NBPSO, we have proposed the FAC-NBPSO algorithm, which adopts the acceleration coefficients of [8] to accelerate the convergence speed of NBPSO. Based on the fitness of each particle, the proposed FAC-NBPSO algorithm accelerates the convergence speed of NBPSO by modifying the cognitive and social components.

The purpose of increasing c1 and decreasing c2 for each particle with higher fitness in the population is to enhance the convergence of NBPSO.

FAC-NBPSO modifies c1 and c2 adaptively for each particle in terms of its fitness rank:

c1 = 1.5 − (1 − FR)²,
c2 = 0.01·(1 − FR)³,   (8)

where FR is the fitness rank, and 1.5, 0.01 and 1 are empirical constants.
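A short Python sketch of the BPSO update in equations (1)-(3) may help make the bit-flip mechanics concrete. The velocity clamp vmax is a common BPSO convention that the text does not mention, and the default parameter values follow Table I for BPSO; both are assumptions of this sketch.

```python
import math
import random

def bpso_step(x, v, pbest, gbest, w=0.5, c1=2.0, c2=2.0, vmax=6.0):
    """One BPSO iteration for a single particle (equations (1)-(3)).

    x, pbest, gbest are lists of 0/1 bits; v is the real-valued
    velocity vector of the same length.
    """
    for d in range(len(x)):
        rd1, rd2 = random.random(), random.random()
        # Equation (1): inertia + cognitive + social terms.
        v[d] = (w * v[d]
                + c1 * rd1 * (pbest[d] - x[d])
                + c2 * rd2 * (gbest[d] - x[d]))
        v[d] = max(-vmax, min(vmax, v[d]))  # clamp (assumed convention)
        # Equation (2): sigmoid maps velocity to a probability in [0, 1].
        sg = 1.0 / (1.0 + math.exp(-v[d]))
        # Equation (3): bit d becomes 1 with probability sg, else 0.
        x[d] = 1 if random.random() < sg else 0
    return x, v
```

Note that, unlike in CPSO, the velocity here never moves the particle directly; it only shapes the probability of each bit being 1.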

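The NBPSO update of equations (4)-(7) can be sketched similarly. The text calls d1 and d2 "temporary values" without defining them; the sign convention below (a guide bit of 1 raises v1 and lowers v0, and vice versa) follows the original NBPSO paper [9] and should be read as an assumption of this sketch.

```python
import math
import random

def nbpso_step(x, v1, v0, pbest, gbest, w=0.5, c1=1.0, c2=1.0):
    """One NBPSO iteration for a single particle (equations (4)-(7)).

    v1[j] / v0[j] track the tendency of bit j to change to 1 / 0.
    """
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        # Temporary terms: each guide pulls the bit toward its own value.
        d1 = c1 * r1 if pbest[j] == 1 else -c1 * r1  # cognitive term
        d2 = c2 * r2 if gbest[j] == 1 else -c2 * r2  # social term
        v1[j] = w * v1[j] + d1 + d2                  # equation (4)
        v0[j] = w * v0[j] - d1 - d2                  # equation (5)
        vc = v1[j] if x[j] == 0 else v0[j]           # equation (6)
        # Equation (7): flip the bit with probability Sg(vc).
        if random.random() < 1.0 / (1.0 + math.exp(-vc)):
            x[j] = 1 - x[j]
    return x, v1, v0
```

Because vc measures the tendency to flip the current bit, the inertia weight here keeps a memory of previous change rates rather than of a real-valued step, which is what makes choosing w easier than in plain BPSO.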
Before modifying the acceleration coefficients, FAC-NBPSO sorts all the particles in the population with respect to their fitness, and a fitness rank is then assigned to each particle. FAC-NBPSO assigns the first rank to the particle with the greatest fitness value and later ranks to the particles with smaller fitness values. This improves the convergence speed.

The procedure of FAC-NBPSO is given below:

I. Initialize the particle swarm with random positions within the hypercube (particles take binary values 0 or 1).
II. Calculate the fitness of every particle using its present position.
III. Obtain pbest by comparing the fitness of each particle with its best fitness so far. Set the present position as the pbest position if the fitness value at the current position is better than that of the best position.
IV. Obtain gbest by comparing the fitness of each particle with the best fitness within the population. Set the present position as the gbest position if the fitness value at the present position is better than that of the best position.
V. Sort and rank all the particles according to their fitness values.
VI. Calculate the acceleration coefficients for each particle in terms of fitness using equation (8).
VII. Update each particle's velocity using equations (4) and (5).
VIII. Update the velocity of change of bits using equation (6).
IX. Update the position of each particle by generating a random variable in the range [0, 1] and applying equation (7).
X. Return to step II and repeat until convergence occurs.

To elaborate the algorithm more clearly, its flow chart is given in Fig. 1.

Fig. 1. FAC-NBPSO algorithm.

V. EXPERIMENTAL SETUP

Benchmark test functions are utilized to validate and compare the characteristics and performance of optimization algorithms, such as convergence rate, robustness, precision, and general performance. The performance of the FAC-NBPSO algorithm, which modifies the cognitive and social components based on the fitness of each particle, is measured by comparing it with the NBPSO algorithm. The experiments are carried out on minimization of the test functions.

The dimension is represented as N, the domain of the problem is denoted as lb ≤ xi ≤ ub, and the optimal solution is denoted by f(x*) = f(x1, ..., xN), where lb is the lower bound of the variables and ub is the upper bound of the variables. The four benchmark functions with zero global minimum used in the experiments on FAC-NBPSO are:

I. Sphere function

f1(x) = Σ_{i=1..N} xi².   (9)

The Sphere function is continuous, differentiable, separable, scalable, and unimodal; the domain of the problem is 0 ≤ xi ≤ 10. The optimal solution is x* = (0, ..., 0), f(x*) = 0.

II. Rosenbrock function

f2(x) = Σ_{i=1..N−1} [100·(x_{i+1} − xi²)² + (xi − 1)²].   (10)

The Rosenbrock function is continuous, differentiable, non-separable, scalable, and unimodal; the domain of the problem is −30 ≤ xi ≤ 30. The optimal solution is x* = (1, ..., 1), f(x*) = 0.

III. Griewangk function

f3(x) = (1/4000)·Σ_{i=1..N} xi² − Π_{i=1..N} cos(xi/√i) + 1.   (11)

The Griewangk function is continuous, differentiable, non-separable, scalable, and multimodal; the domain of the problem is −100 ≤ xi ≤ 100. The optimal solution is x* = (0, ..., 0), f(x*) = 0.

IV. Rastrigin function

f4(x) = Σ_{i=1..N} (xi² − 10·cos(2π·xi) + 10).   (12)

Rastrigin is a highly multimodal function with problem domain −5.12 ≤ xi ≤ 5.12; the global minimum is f(x*) = 0 at x* = (0, ..., 0).

For the above benchmark functions, N represents the dimension of the search space and is set to 3, 5, and 10. The population size is set to 100 for 1000 iterations, the range of particles is [−50, 50], and 20 bits are used for the binary representation of real numbers.

Table I shows the parameter settings used for the algorithms applied in this study.
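The ranking step and equation (8) can be sketched in Python as follows. The paper does not state how the fitness rank FR is normalized; this sketch assumes FR runs linearly from 1/n for the worst particle to 1 for the best, so that the fittest particle receives the largest c1 and the smallest c2. That normalization is an assumption, not something the text specifies.

```python
def fac_coefficients(fitnesses, minimize=True):
    """Per-particle (c1, c2) from equation (8), via fitness ranking.

    The FR normalization (1/n ... 1, with the best particle at 1) is
    assumed; the paper only states that fitter particles rank first.
    """
    n = len(fitnesses)
    # Order particle indices from worst to best fitness.
    worst_to_best = sorted(range(n), key=lambda i: fitnesses[i],
                           reverse=minimize)
    coeffs = [None] * n
    for pos, i in enumerate(worst_to_best):
        fr = (pos + 1) / n                  # fitness rank FR in (0, 1]
        c1 = 1.5 - (1.0 - fr) ** 2          # equation (8): cognitive
        c2 = 0.01 * (1.0 - fr) ** 3         # equation (8): social
        coeffs[i] = (c1, c2)
    return coeffs
```

Under this schedule c1 grows from roughly 0.5 toward 1.5 and c2 shrinks from roughly 0.01 toward 0 as a particle's rank improves, matching the stated aim of increasing c1 and decreasing c2 for fitter particles.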

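The four benchmark functions of equations (9)-(12) translate directly into Python. This sketch assumes the standard forms of these functions, in particular the cos(xi/√i) term inside the Griewangk product:

```python
import math

def sphere(x):       # equation (9)
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):   # equation (10)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def griewank(x):     # equation (11); cos(x_i / sqrt(i)) with 1-based i
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def rastrigin(x):    # equation (12)
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)
```

All four return 0 at their stated optima, which gives a quick sanity check for any implementation used to reproduce the results in Section VI.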
TABLE I. PARAMETER SETTINGS

Sr. no | Algorithm  | Parameter | Value
1      | FAC-NBPSO  | w         | 0.5
       |            | c1        | 1.5 − (1 − FR)²
       |            | c2        | 0.01·(1 − FR)³
2      | NBPSO      | w         | 0.5
       |            | c1 & c2   | 1.0
3      | BPSO       | w         | 0.9 to 0.1
       |            | c1 & c2   | 2

VI. EXPERIMENTAL RESULTS

This section presents the experimental results of the proposed algorithm, obtained by comparing its results with those of NBPSO [9]. Each experiment is conducted for 10 runs on every function over 1000 iterations. The best value of the global best is chosen to form the mean of the global best obtained by executing all the algorithms in 10 runs over 1000 iterations on the four benchmark test functions of the previous section.

The experimental results presented in Table II show that the proposed FAC-NBPSO outperforms the other algorithms of [9] and improves the convergence speed for the Sphere function.

The next experiment was carried out on the second test function. From the results presented in Table III, it can be seen that the proposed FAC-NBPSO improves the convergence speed for the Rosenbrock function only when N = 10, compared to the other algorithms of [9].

Table IV presents the results of the experiment conducted on the Griewangk function; they show that the proposed FAC-NBPSO accelerates the convergence speed when N = 3 and N = 10, compared to the other algorithms of [9].

The experimental results presented in Table V show that the proposed FAC-NBPSO quite efficiently accelerates the convergence speed for the Rastrigin function, compared to the other algorithms of [9].

TABLE II. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING SPHERE FUNCTION FOR MINIMIZATION IN 10 RUNS OF THE ALGORITHMS

Dimension | FAC-NBPSO    | NBPSO [9]    | BPSO [9]  | BPSO [9]
N = 3     | 6.8212×10^-9 | 6.8212×10^-9 | 0.0561    | 0.1545
N = 5     | 1.369×10^-8  | 1.9213×10^-6 | 7.9578    | 22.8995
N = 10    | 2.737×10^-8  | 0.1121       | 216.6069  | 394.7066

TABLE III. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING ROSENBROCK FUNCTION FOR MINIMIZATION IN 10 RUNS OF THE ALGORITHMS

Dimension | FAC-NBPSO | NBPSO [9] | BPSO [9]    | BPSO [9]
N = 3     | 0.1875    | 0.5164    | 0.9384      | 0.8645
N = 5     | 1.2275    | 2.5162    | 1406        | 3746.5
N = 10    | 8.8127    | 367.83    | 1.3094×10^6 | 1.52321×10^6

TABLE IV. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING GRIEWANGK FUNCTION FOR MINIMIZATION IN 10 RUNS OF THE ALGORITHMS

Dimension | FAC-NBPSO    | NBPSO [9]    | BPSO [9] | BPSO [9]
N = 3     | 2.0860×10^-9 | 2.0860×10^-9 | 0.003    | 0.0277
N = 5     | 7.4×10^-3    | 0.0099       | 0.2113   | 0.1503
N = 10    | 2.0860×10^-9 | 0.0519       | 0.8282   | 1.0254

TABLE V. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING RASTRIGIN FUNCTION FOR MINIMIZATION IN 10 RUNS OF THE ALGORITHMS

Dimension | FAC-NBPSO    | NBPSO [9]    | BPSO [9] | BPSO [9]
N = 3     | 4.5109×10^-9 | 1.3533×10^-6 | 0.1716   | 0.2025
N = 5     | 4.5109×10^-9 | 0.0034       | 0.5824   | 0.6574
N = 10    | 4.5109×10^-9 | 10.3925      | 0.5824   | 1.4333

The comparison of the experimental results shown in Tables II-V illustrates that the performance of the proposed FAC-NBPSO algorithm is quite adequate and much superior to the results of [9]. The comparison of the experimental results is also presented graphically in Figs. II-IV.

VII. ADDITIONAL EXPERIMENTS & DISCUSSION

To further evaluate the convergence speed of the proposed FAC-NBPSO algorithm, more experiments were carried out on the four test functions, reducing the number of iterations from 1000 down to 50. Each experiment is conducted for 10 runs on every function over 500, 200, 100 and 50 iterations, with 3, 5 and 10 dimensions.

The first experiment was carried out on dimension 3 for all four test functions with 500, 200, 100 and 50 iterations. The experimental results shown in Table VI, when compared with the results for 1000 iterations presented in Tables II-V, demonstrate that the convergence speed of the proposed FAC-NBPSO decreases for all test functions as the number of iterations is reduced.

TABLE VI. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING ALL FUNCTIONS WITH DIFFERENT ITERATIONS IN 10 RUNS OF THE FAC-NBPSO ALGORITHM (N = 3)

Functions  | 500          | 200          | 100          | 50
Sphere     | 6.8212×10^-9 | 6.8212×10^-9 | 6.8212×10^-9 | 1.7531×10^-6
Rosenbrock | 0.17747      | 1.8534       | 1.7569       | 1.5824
Griewangk  | 6.912×10^-3  | 7.108×10^-3  | 7.952×10^-3  | 9.932×10^-3
Rastrigin  | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7

The next experiment was carried out on dimension 5 for all four test functions with 500, 200, 100 and 50 iterations. The experimental results shown in Table VII, when compared with the results for 1000 iterations presented in Tables II-V, demonstrate that the convergence speed of the proposed FAC-NBPSO algorithm decreases for all test functions as the number of iterations is reduced.

TABLE VII. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING ALL FUNCTIONS WITH DIFFERENT ITERATIONS IN 10 RUNS OF THE FAC-NBPSO ALGORITHM (N = 5)

Functions  | 500          | 200          | 100          | 50
Sphere     | 1.1369×10^-8 | 1.1369×10^-8 | 2.9559×10^-8 | 7.7603×10^-6
Rosenbrock | 2.7871       | 3.2948       | 4.4074       | 10.0468
Griewangk  | 0.0100       | 0.0252       | 0.0103       | 0.0621
Rastrigin  | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7

The next experiment was carried out on dimension 10 for all four test functions with 500, 200, 100 and 50 iterations. The experimental results shown in Table VIII, when compared with the results for 1000 iterations presented in Tables II-V, demonstrate that the convergence speed of the proposed FAC-NBPSO algorithm decreases for all test functions as the number of iterations is reduced.

TABLE VIII. EXPERIMENTAL RESULTS FOR GLOBAL BEST USING ALL FUNCTIONS WITH DIFFERENT ITERATIONS IN 10 RUNS OF THE FAC-NBPSO ALGORITHM (N = 10)

Functions  | 500          | 200          | 100          | 50
Sphere     | 2.2737×10^-8 | 4.0609×10^-6 | 0.0105       | 1.6026
Rosenbrock | 8.7830       | 8.8185       | 14.1053      | 476.3454
Griewangk  | 0.0148       | 0.0718       | 0.0651       | 0.2358
Rastrigin  | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7 | 4.5109×10^-7

The experimental results of the proposed FAC-NBPSO algorithm shown in Tables VI-VIII illustrate that particles converge quickly and efficiently with 1000 iterations because they get a higher chance to explore the search space. When the iterations are reduced from 1000 to 500, 200, 100 and 50, the convergence speed decreases, as there is less chance for the particles to explore the search space. The comparison of the experimental results is also presented graphically in Figs. V-VII. Based on fitness, the proposed FAC-NBPSO algorithm ranks the particle with the greater fitness value first and the particles with smaller fitness values later. The proposed FAC-NBPSO algorithm attains a great improvement over NBPSO by modifying the cognitive and social components. This improves the convergence speed and contributes substantially to improving the performance of the NBPSO algorithm [9]. At the same time, the uniformity of the convergence speed enhances the robustness and stability of the FAC-NBPSO algorithm.

VIII. CONCLUSION

In this work, a fitness-based acceleration coefficient strategy is introduced into novel BPSO to enhance its convergence speed. Unlike NBPSO, where fixed acceleration coefficients are adopted for all particles, ignoring the dispersion of particles in the search space, in the proposed FAC-NBPSO all particles are ranked according to their fitness and the cognitive and social components are modified accordingly. Experiments are performed on four benchmark test functions to evaluate the performance of FAC-NBPSO. The findings affirm the improved performance of the proposed FAC-NBPSO over the compared algorithms in terms of convergence speed. To validate the improved convergence speed of FAC-NBPSO, additional experiments are carried out on the four benchmark test functions with different numbers of iterations. The experimental results demonstrate that FAC-NBPSO performs better not only at 1000 iterations; it still improves the convergence speed when the iterations are reduced from 1000 to 500, 200, 100, and 50.

REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Networks, 1995, pp. 1942-1948.
[2] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, 1997, pp. 4104-4108.
[3] T. Ziyu and Z. Dingxue, "A modified particle swarm optimization with adaptive acceleration coefficients," in Proc. Asia-Pacific Conf. Information Processing (APCIP), 2009, pp. 330-332.
[4] Z. Wu and J. Zhou, "A self-adaptive particle swarm optimization algorithm with individual coefficients adjustment," in Proc. Int. Conf. Computational Intelligence and Security, 2007, pp. 133-136.
[5] C. Banerjee and R. Sawal, "PSO with dynamic acceleration coefficient based on multiple constraint satisfaction: Implementing fuzzy inference system," in Proc. Int. Conf. Advances in Electronics, Computers and Communications (ICAECC), 2014, pp. 1-5.
[6] R. Huifeng, X. Jun, and H. Guyu, "Fitness feedback based particles swarm optimization," in Proc. 34th Chinese Control Conference (CCC), 2015, pp. 2673-2677.
[7] G. Ma, R. Gong, Q. Li, and G. Yao, "An improved particle swarm optimization algorithm with dynamic acceleration coefficients," Bulletin of Electrical Engineering and Informatics, vol. 5, pp. 489-494, 2016.
[8] C. Dong, Z. Chen, and S. Sun, "The acceleration coefficients self-adapting in PSO," International Journal of Digital Content Technology and its Applications, vol. 7, p. 672, 2013.
[9] M. A. Khanesar, M. Teshnehlab, and M. A. Shoorehdeli, "A novel binary particle swarm optimization," in Proc. Mediterranean Conf. Control & Automation (MED'07), 2007, pp. 1-6.
[10] H. Nezamabadi-pour, M. Rostami-Shahrbabaki, and M. Maghfoori-Farsangi, "Binary particle swarm optimization: challenges and new solutions," CSI J. Comput. Sci. Eng., vol. 6, pp. 21-32, 2008.
[11] A. Cervantes, I. M. Galván, and P. Isasi, "Binary particle swarm optimization in classification," 2005.
[12] S. Lee, S. Soak, S. Oh, W. Pedrycz, and M. Jeon, "Modified binary particle swarm optimization," Progress in Natural Science, vol. 18, pp. 1161-1166, 2008.
[13] Y.-W. Jeong, J.-B. Park, S.-H. Jang, and K. Y. Lee, "A new quantum-inspired binary PSO: application to unit commitment problems for power systems," IEEE Transactions on Power Systems, vol. 25, pp. 1486-1495, 2010.
[14] D. Singh, V. Singh, and U. Ansari, "Binary particle swarm optimization with crossover operation for discrete optimization," International Journal of Computer Applications, vol. 28, pp. 15-20, 2011.
[15] N. Zhang, J. Xiong, J. Zhong, and L. Thompson, "Feature selection method using BPSO-EA with ENN classifier," in Proc. 8th Int. Conf. Information Science and Technology (ICIST), 2018, pp. 364-369.
[16] A. Zabidi, I. M. Yassin, N. M. Tahir, Z. I. Rizman, and M. Karbasi, "Comparison between binary particle swarm optimization (BPSO) and binary artificial bee colony (BABC) for nonlinear autoregressive model structure selection of chaotic data," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, pp. 730-754, 2017.

[17] J. Wei, R. Zhang, Z. Yu, R. Hu, J. Tang, C. Gui, and Y. Yuan, "A BPSO-SVM algorithm based on memory renewal and enhanced mutation mechanisms for feature selection," Applied Soft Computing, vol. 58, pp. 176-192, 2017.

Fig. II. Experimental results for global best using all functions in 10 runs of the FAC-NBPSO algorithm when N = 3.

Fig. III. Experimental results for global best using all functions in 10 runs of the FAC-NBPSO algorithm when N = 5.

Fig. IV. Experimental results for global best using all functions in 10 runs of the FAC-NBPSO algorithm when N = 10.

Fig. V. Experimental results for global best using all functions with different iterations in 10 runs of the FAC-NBPSO algorithm when N = 3.

Fig. VI. Experimental results for global best using all functions with different iterations in 10 runs of the FAC-NBPSO algorithm when N = 5.

Fig. VII. Experimental results for global best using all functions with different iterations in 10 runs of the FAC-NBPSO algorithm when N = 10.
