
Title: Particle Swarm Optimization Techniques for Adaptive Filters: A Comprehensive Review

Abstract: Adaptive equalization plays a crucial role in mitigating distortions and compensating for frequency response variations in communication systems. It aims to enhance signal quality by adjusting the characteristics of the received signal.

Particle Swarm Optimization (PSO) algorithms have shown promise in optimizing the tap weights of the equalizer. However, there is a need to further enhance the optimization capabilities of PSO to improve the equalization performance. This paper provides a comprehensive study of PSO variants and of hybrid approaches that combine PSO with other optimization algorithms to achieve better convergence, accuracy, and adaptability.

Traditional PSO algorithms often suffer from high computational complexity and slow convergence rates, limiting their effectiveness in solving complex optimization problems. To address these limitations, this paper surveys a set of techniques aimed at reducing the complexity and accelerating the convergence of PSO.

Introduction

Particle Swarm Optimization (PSO) is a computational optimization technique inspired by the collective behavior of swarms.

It was originally proposed by Kennedy and Eberhart in 1995 and has since become an effective and popular method for solving various optimization problems[1]. PSO simulates the social behavior of a swarm of particles, where each particle represents a potential solution in the search space[2]. The particles move through the search space, adjusting their positions based on their own experience and the experiences of their neighboring particles.

The main objective is to find the optimal solution by iteratively updating the positions of the particles in search of better solutions. At the start, the PSO algorithm initializes a population of particles randomly within the search space[3]. Each particle has a velocity vector and a position vector that determine its movement in the search space.

Each particle keeps track of the best position it has achieved so far, called its "personal best", which represents the best solution it has found. The standard PSO algorithm is discussed in detail later to show how it works[4]. In each iteration, the particles evaluate their positions using an objective function, which quantifies the quality of a candidate solution and determines the local best and global best positions.

On the basis of this evaluation, each particle compares its current position with its local best and updates the local best if a better solution is found. The particle also compares its local best, known as 'pbest', with the global best position found by any particle in the swarm. If a better global solution is found, the global best, known as 'gbest', is updated accordingly.
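
As a minimal sketch of this bookkeeping (assuming a minimization problem; the function and variable names below are illustrative, not taken from the paper):

```python
import numpy as np

def update_bests(objective, positions, pbest_pos, pbest_val, gbest_pos, gbest_val):
    """Update each particle's pbest, and the swarm's gbest, after one move."""
    for i, xi in enumerate(positions):
        f = objective(xi)                 # quality of the current position
        if f < pbest_val[i]:              # better than this particle's pbest?
            pbest_val[i] = f
            pbest_pos[i] = xi.copy()
        if f < gbest_val:                 # better than the swarm's gbest?
            gbest_val = f
            gbest_pos = xi.copy()
    return pbest_pos, pbest_val, gbest_pos, gbest_val
```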

On the basis of the global best and the local best, each particle's velocity and position are updated, moving the particle toward the best solution.

Applications

Owing to modifications and advancements of PSO, various applications have been reported in the literature[5], thanks to its ability to search efficiently for optimal solutions. PSO can be used to optimize mathematical or numerical functions of more than one variable.

By exploring the search space, particles can locate the global maximum and minimum of a function. This application is needed mostly in fields such as engineering design, data analysis, and financial modeling. In image and signal processing, PSO has been employed for a wide range of tasks[6].

It can optimize parameters in image reconstruction, denoising, feature extraction, and object recognition. PSO algorithms have produced promising results in optimizing parameters for image and signal processing techniques, raising the quality and efficiency of these processes[7]. PSO can also be used to train the weights and biases of neural networks.

It has been employed as an alternative to traditional optimization algorithms such as backpropagation, improving the training process and helping to avoid local optima. Training algorithms based on PSO can enhance the accuracy and convergence speed of neural networks, improving their effectiveness in pattern recognition, classification, and prediction tasks. PSO can also be used to solve optimization problems in power systems[5].

It can optimize various aspects such as power flow, unit commitment, economic
dispatch, and capacitor placement. PSO-based approaches enable efficient utilization of
power resources, leading to improved power system operation, reduced costs, and
enhanced stability. It can be used for feature selection in machine learning and data
mining tasks.

By selecting a subset of relevant features, PSO helps in dimensionality reduction, improving classification accuracy, and reducing computational complexity. This application is particularly useful in areas such as text mining, bioinformatics, and image recognition[8]. PSO is also used to optimize vehicle routing problems, including route planning, delivery scheduling, and fleet management.

By considering factors such as distance, capacity, and time constraints, PSO algorithms
can determine efficient routes and schedules, minimizing transportation costs and
improving logistics operations. Another application of PSO in electronics is in the
optimization of antenna design[9]. Antennas are crucial components in wireless
communication systems, and their performance greatly impacts signal reception and
transmission.

PSO has also been applied in radar waveform design, where it can optimize the characteristics of radar waveforms, such as pulse duration, modulation schemes, and frequency characteristics, to enhance target detection, resolution, and interference mitigation. By iteratively adjusting particle positions representing waveform parameters, PSO can efficiently explore the design space and converge on optimal solutions that maximize radar performance.

This enables radar systems to improve their capabilities in detecting and tracking
targets, reducing interference, and enhancing overall operational efficiency.

Time and Space Complexity of PSO

In terms of time complexity, the main computational cost of PSO lies in evaluating the objective function for each particle in each iteration. The objective function represents the problem to be optimized and can vary in complexity depending on the problem domain.

Therefore, the time complexity of PSO is closely related to the evaluation time of the
objective function. In each iteration, all particles need to evaluate their positions, update
their personal bests and the global best, and adjust their velocities and positions. This
process continues until a termination condition is met.

The number of iterations required for convergence depends on various factors such as
the problem complexity, the size of the search space, and the convergence speed of the
swarm[10]. Generally, the time complexity of PSO is considered to be moderate, as it
typically requires a reasonable number of iterations to converge to an acceptable
solution[11].

A greater number of iterations also leads to a larger memory requirement. PSO requires memory to store the positions, velocities, personal bests, and global best of each particle in the swarm. The amount of memory required is proportional to the population size, which is typically determined by the problem being solved.

PSO may also require memory to store auxiliary variables, such as acceleration
coefficients and parameters controlling the swarm behavior. The space complexity of
PSO is therefore determined by the memory requirements for storing the swarm's state
and other relevant variables. The space complexity is generally considered to be
reasonable, as it scales linearly with the population size and does not depend on the
size of the search space.
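
These observations can be summarized in rough big-O form. Assuming $N$ particles, $D$ dimensions, $I$ iterations, and a cost $C_f$ per objective-function evaluation (symbols introduced here only for this estimate):

$$\text{Time} = O\big(I \cdot N \cdot (D + C_f)\big), \qquad \text{Space} = O(N \cdot D)$$

The $O(N \cdot D)$ space term covers positions, velocities, and personal bests, matching the linear scaling with population size noted above.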

Variations of Particle Swarm Optimization

PSO is an optimization method inspired by swarm behavior observed in nature, where a population of particles represents the optimization parameters[12]. These particles collectively search for optimal solutions within a multi-dimensional search space. The objective of the algorithm is to converge towards the best possible values for each parameter.

The fitness of each particle is evaluated using a fitness function, which quantifies the
quality of the particle's solution estimate. Each particle maintains two state variables: its
position (x(i)) and velocity (v(i)), where "i" represents the iteration index. The fitness of a
particle is determined by evaluating a cost function associated with its solution estimate.

Through information sharing, each particle combines its own best solution with the best
solution found by the entire swarm, adjusting its search pattern accordingly. This
iterative process continues until an optimal solution is reached or a termination criterion
is met.

The following equations represent the velocity and position update mechanism in PSO:

$$v_d(i+1) = v_d(i) + c_1 r_1 \big(p_d - x_d(i)\big) + c_2 r_2 \big(g_d - x_d(i)\big)$$

$$x_d(i+1) = x_d(i) + v_d(i+1)$$

The velocity is updated by combining the particle's previous velocity, the cognitive
component based on its personal best solution, and the social component based on the
global best solution found by the swarm. This combination allows the particle to
maintain its momentum, explore its individual best solution, and be influenced by the
overall best solution.

The updated velocity is then used to update the particle's position, determining its next location in the search space. The updated version of PSO, with an inertia term, is given below:

$$v_d(i+1) = w\,v_d(i) + c_1 r_1 \big(p_d - x_d(i)\big) + c_2 r_2 \big(g_d - x_d(i)\big)$$

$$x_d(i+1) = x_d(i) + v_d(i+1)$$

where $c_1$ represents the cognitive term and $c_2$ the social term, and $d$ is the dimension index of the particles. $p_d$ is the local best and $g_d$ is the global best of the particle. $r_1$ and $r_2$ are random variables whose values lie between 0 and 1[13], while the momentum of the particle is controlled by the inertia weight $w$. When the inertia is zero, the model will only explore and becomes independent of past values. The convergence rate of the PSO algorithm refers to the speed at which the algorithm converges towards an optimal solution[14].
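
For concreteness, a minimal Python sketch of the inertia-weight update above follows. It assumes a minimization problem, uses the sphere function purely as a placeholder objective, and the values w = 0.7 and c1 = c2 = 1.5 are common defaults rather than values prescribed by this paper.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal inertia-weight PSO for minimization (a sketch, not a tuned solver)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.array([objective(p) for p in x])    # personal best values
    g = pbest[np.argmin(pbest_val)].copy()             # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))            # r1, r2 ~ U(0, 1)
        r2 = rng.random((n_particles, dim))
        # v(i+1) = w*v(i) + c1*r1*(p - x(i)) + c2*r2*(g - x(i))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # x(i+1) = x(i) + v(i+1)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val                      # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()         # update global best
    return g, float(pbest_val.min())

# Usage: minimize the 5-dimensional sphere function.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```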

The convergence rate of PSO can be influenced by various factors, including the problem complexity, population size, inertia weight, acceleration coefficients, and termination conditions[15]. The flow chart of the standard PSO algorithm can be found in [16]. PSO has the potential for fast convergence due to its ability to share information among particles in the swarm[9].

The collective knowledge sharing enables particles to converge towards promising regions of the search space[17]. However, the convergence rate of standard PSO can be affected by the balance between exploration and exploitation. If exploration is too dominant, the algorithm may take longer to converge.

On the other hand, if the exploitation is too dominant, the algorithm may converge
prematurely to local optima. To enhance the convergence rate, several strategies can be
employed[6]. One approach is to adaptively adjust the parameters of the algorithm
during the optimization process[18].

This includes modifying the inertia weight and acceleration coefficients to balance
exploration and exploitation at different stages of the optimization[19]. Different variants of the PSO algorithm exist in the literature to achieve better complexity and faster convergence.

Recent Advancements in the PSO Algorithm

Ring Topology in Particle Swarm Optimization

Ring topology in Particle Swarm Optimization is a variation of the algorithm in which the particles are arranged in a circular ring structure instead of a fully connected network.

In this topology, each particle is only connected to its immediate neighbours, creating a
cyclic structure [20]. In the ring topology, the communication and information sharing
among particles are limited to the adjacent neighbours. This arrangement allows for a
more localized interaction, as each particle only exchanges information with its
neighbours rather than the entire swarm.

The neighbour particles influence the velocity and position updates of a given particle.
During each iteration, the particle evaluates its fitness based on the objective function
and updates its personal best position. It then exchanges information with its
neighbours, considering both its own best position and the best position found by its
neighbours.

The particle's velocity and position are updated using the information obtained from
these local interactions. In the ring topology, the updating equation for velocity and
position remains the same as in the standard PSO algorithm. However, the difference
lies in the information sharing process, where particles only consider the personal best
position and the best position of their adjacent neighbours when updating their velocity
and position.
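
To make the difference concrete, the sketch below builds a ring-topology 'lbest' that replaces the single global best in the velocity update; it assumes the pbest arrays from the earlier PSO sketch, and the helper name is ours.

```python
import numpy as np

def ring_local_best(pbest, pbest_val):
    """For each particle, return the best pbest among itself and its
    two ring neighbours (indices wrap around to form the cycle)."""
    n = len(pbest)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]   # immediate ring neighbours
        best = min(neigh, key=lambda j: pbest_val[j])
        lbest[i] = pbest[best]
    return lbest

# In the velocity update, lbest[i] replaces the single global best g:
# v = w*v + c1*r1*(pbest - x) + c2*r2*(ring_local_best(pbest, pbest_val) - x)
```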

Dynamic Multi-Swarm Particle Swarm Optimization

Dynamic Multi-Swarm Particle Swarm Optimization is an extension of the standard PSO algorithm that incorporates the concept of multiple swarms to improve optimization performance in dynamic environments[21]. Unlike the standard PSO, where all particles belong to a single swarm, dynamic multi-swarm PSO divides the population into multiple subgroups or swarms, each with its own characteristics and behavior[22]. In this approach, each swarm operates independently, with its own set of particles exploring the search space.

The swarms can be formed based on different criteria, such as spatial division or
clustering techniques[20]. Each swarm maintains its own local best positions (pbest) and
global best position (gbest), representing the best solutions found within the swarm and
the overall best solution obtained among all swarms, respectively. The use of multiple
swarms in dynamic multi-swarm PSO provides several advantages[3].

It allows for a more distributed exploration of the search space, enabling the algorithm
to better handle dynamic changes and avoid being trapped in local optima. The
dynamic reconfiguration of swarms facilitates adaptation to changes and improves the
algorithm's robustness and responsiveness.
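
A minimal sketch of the sub-swarm bookkeeping is shown below (the velocity update itself is unchanged). Random regrouping every few iterations is one simple strategy; the period and the helpers here are illustrative assumptions, not the exact scheme of [22].

```python
import numpy as np

def regroup(n_particles, n_swarms, rng):
    """Randomly partition particle indices into sub-swarms."""
    idx = rng.permutation(n_particles)
    return np.array_split(idx, n_swarms)

def swarm_bests(groups, pbest_val):
    """Per sub-swarm, the index of its best member (its own local 'gbest')."""
    return [g[np.argmin(pbest_val[g])] for g in groups]

# Every few iterations (e.g. every 5), call regroup() again so that
# information migrates between sub-swarms as the search progresses.
```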

Fully Informed Particle Swarm Optimization (FIPS)

Fully Informed Particle Swarm Optimization (FIPS) enhances the communication and
information exchange among particles within a swarm[23]. In FIPS, each particle not only
considers its personal best solution (pbest) and the global best solution (gbest) but also
incorporates information from the best solutions of its neighboring particles.

This fully informed approach allows for a more comprehensive exploration of the search
space and can lead to improved optimization performance. Communication topology
can take different forms, such as a fully connected network or a spatially defined
neighborhood structure. Each particle maintains a set of information about its
neighbors' best solutions, known as the "flock" information.

This flock information consists of the position and fitness values of the neighboring
particles' best solutions. The fully informed communication mechanism in FIPS facilitates
cooperative interactions among particles, enabling them to share valuable information
and guide each other towards promising regions in the search space.
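
A sketch of the fully informed velocity update is shown below, using the constriction-factor form commonly associated with FIPS (chi ≈ 0.7298 and phi = 4.1 are conventional values, not ones given in this paper); the neighbour list is left abstract so any topology can supply it.

```python
import numpy as np

def fips_velocity(i, x, v, pbest, neighbors, chi=0.7298, phi=4.1, rng=None):
    """Fully informed update: particle i is attracted towards the personal
    bests of ALL its neighbours instead of a single pbest/gbest pair."""
    if rng is None:
        rng = np.random.default_rng()
    nb = neighbors[i]                                 # indices of i's neighbours
    acc = np.zeros_like(x[i])
    for k in nb:
        u = rng.uniform(0.0, phi / len(nb), size=x[i].shape)
        acc += u * (pbest[k] - x[i])                  # each neighbour contributes
    return chi * (v[i] + acc)                         # constriction factor chi
```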

This enhanced communication promotes a balance between exploration and exploitation, helping to avoid premature convergence to suboptimal solutions.

Dynamic Neighborhood in Particle Swarm Optimization

Dynamic neighborhood in Particle Swarm Optimization is an adaptive mechanism that allows particles to adjust their communication network or neighborhood structure during the optimization process[24].

Unlike the traditional fixed neighborhood approach, where each particle interacts with a
fixed set of neighbors, dynamic neighborhood enables particles to dynamically form and
update their local neighborhoods based on the problem dynamics or specific
optimization requirements. The neighborhood structure is not predetermined but
evolves over time. Initially, particles are assigned to random neighborhoods or a
predefined initial configuration[20].

As optimization progresses, particles continuously evaluate their performance and exchange information with their neighbors. Based on this information, particles may reconfigure their neighborhoods by adding or removing neighboring particles, thereby dynamically adjusting the communication network.
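
One simple way to realize such reconfiguration is a distance-based neighbourhood recomputed as the swarm moves. The sketch below assumes Euclidean distance and a fixed neighbourhood size k, which are illustrative choices rather than the specific method of [24].

```python
import numpy as np

def dynamic_neighbors(x, k=3):
    """Recompute each particle's neighbourhood as its k nearest particles
    (by Euclidean distance) in the current configuration."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                 # exclude self
    return [np.argsort(row)[:k] for row in d]

# Calling dynamic_neighbors(x) each iteration (or every few iterations)
# lets the communication network evolve together with the swarm.
```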

Hybridization in Particle Swarm Optimization (HPSO)

Hybridization in Particle Swarm Optimization (PSO) refers to the integration of PSO with other optimization techniques or problem-specific heuristics to enhance its performance and overcome its limitations[25], [26]. By combining the strengths of multiple algorithms, hybrid PSO aims to achieve improved exploration and exploitation capabilities, increased convergence speed, and better overall solution quality[27].

One common approach is to combine PSO with local search methods, such as gradient descent or hill climbing, to refine the solutions obtained by PSO[28]. This combination allows for a more thorough exploration of the search space and can help escape local optima. Hybrid PSO can also be customized based on the specific characteristics of the problem domain[6].
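
The PSO-plus-local-search pattern can be sketched as a refinement step applied to the global best; the crude coordinate-wise hill climbing below is a placeholder for whichever local method (gradient descent, simulated annealing, etc.) a given hybrid actually uses.

```python
import numpy as np

def hill_climb(objective, x, step=0.1, iters=50, rng=None):
    """Crude local refinement: accept only random perturbations that improve x."""
    if rng is None:
        rng = np.random.default_rng()
    fx = objective(x)
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = objective(cand)
        if fc < fx:                          # keep improving moves only
            x, fx = cand, fc
    return x, fx

# Hybrid pattern: run PSO globally, then polish its answer locally, e.g.
#   g, _ = pso(objective, dim=5)                      # global phase (sketch above)
#   g_refined, f_refined = hill_climb(objective, g)   # local phase
```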

Problem-specific heuristics or knowledge can be incorporated to guide the search process.

Cooperative Particle Swarm Optimization (CPSO)

Cooperative Particle Swarm Optimization (CPSO) is an extension of the standard PSO algorithm that promotes collaboration and information sharing among multiple sub-swarms or groups of particles[29].

In cooperative PSO, instead of having a single global best solution for the entire swarm,
each sub-swarm maintains its own local best solution, and particles from different sub-
swarms communicate and cooperate to collectively search for optimal solutions. The
sub-swarms operate independently, exploring different regions of the search space.

Periodically, particles exchange information about their best solutions with particles
from other sub-swarms [30]. This information sharing allows particles to gain insights
from successful regions discovered by other sub-swarms, promoting exploration beyond
local optima and facilitating a more thorough exploration of the search space.
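
The cooperative scheme of [29] is often realized by splitting the problem dimensions across sub-swarms and evaluating each candidate inside a shared context vector; the sketch below shows only that evaluation pattern, with the helper and variable names being illustrative.

```python
import numpy as np

def evaluate_in_context(objective, context, dims, candidate):
    """Score a sub-swarm's candidate by substituting the dimensions it owns
    into the shared context vector (built from every sub-swarm's best)."""
    trial = context.copy()
    trial[dims] = candidate          # this sub-swarm owns only these dimensions
    return objective(trial)

# Example: sub-swarm A owns dims [0, 1] of a 4-D problem, sub-swarm B owns [2, 3];
# context = np.concatenate([best_A, best_B]) is refreshed whenever either improves.
```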

Self-Organizing Hierarchical Particle Swarm Optimization

In this approach, particles are organized into a hierarchical structure, where higher-level particles oversee the behavior of lower-level particles. This hierarchical arrangement enables a more efficient search process by coordinating the exploration of different regions of the search space.

It is an advanced variant of the standard PSO algorithm that introduces a hierarchical organization among particles to enhance their exploration and exploitation capabilities. The higher-level particles guide the lower-level particles based on their own experiences and the information they receive from the lower-level particles.

The higher-level particles take on a supervisory role, adjusting the influence of different
components in the PSO equation to balance global and local search tendencies[30].
They communicate with and provide guidance to the lower-level particles, influencing
their search patterns and facilitating the discovery of promising regions. It allows for a
distributed and coordinated exploration, as different levels of particles focus on different
regions and scales within the search space[17], [27].

The hierarchical organization helps to mitigate the risk of premature convergence and aids in escaping local optima by facilitating exploration in unexplored regions.

Comprehensive Learning Particle Swarm Optimization (CLPSO)

Comprehensive Learning Particle Swarm Optimization (CLPSO) is an advanced variant of the standard PSO algorithm that introduces a comprehensive learning strategy.

In CLPSO, particles not only learn from their personal best and global best solutions but
also learn from other randomly selected particles within the swarm. This comprehensive
learning mechanism allows for a broader exploration of the search space and promotes
the exchange of valuable information among particles. By incorporating knowledge
from multiple sources, it enhances the diversity of search trajectories, facilitates the
discovery of new regions, and improves the convergence speed and solution quality.

The comprehensive learning strategy in CLPSO enables particles to make more informed
decisions during the optimization process, leveraging the collective intelligence of the
swarm to achieve better performance in solving complex optimization problems.
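
The exemplar-construction idea can be sketched per dimension: with some learning probability, a dimension follows another particle's pbest instead of the particle's own. The uniform random choice below stands in for CLPSO's tournament selection, and Pc = 0.3 is only an illustrative value.

```python
import numpy as np

def clpso_exemplar(i, pbest, pc=0.3, rng=None):
    """Build particle i's exemplar: per dimension, follow a randomly chosen
    other particle's pbest with probability pc, else follow its own pbest."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = pbest.shape
    exemplar = pbest[i].copy()
    for d in range(dim):
        if rng.random() < pc:                 # learn from someone else
            j = rng.integers(n)
            exemplar[d] = pbest[j, d]
    return exemplar

# The velocity update then uses the exemplar in place of pbest/gbest:
# v[i] = w * v[i] + c * rng.random(dim) * (clpso_exemplar(i, pbest) - x[i])
```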
Comparative Study

Challenges and Limitations of Particle Swarm Optimization

Conclusion

Future Directions and Research Opportunities

References

[1] O. Djaneye-Boundjou, "Particle Swarm Optimization Stability Analysis," 2013.
[2] Y. Shi, "Particle Swarm Optimization," 2004.
[3] N. Iwasaki, K. Yasuda, and G. Ueno, "Particle Swarm Optimization: Dynamic parameter adjustment using swarm activity," 2008 IEEE International Conference on Systems, Man and Cybernetics, pp. 2634–2639, 2008.
[4] Z. Li-ping, Y. Huan-jun, and H. Shang-xu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University-SCIENCE A, vol. 6, pp. 528–534, 2005.
[5] J. Xu, X. Yue, and Z. Xin, "A real application of extended particle swarm optimizer," 2005 5th International Conference on Information Communications & Signal Processing, pp. 46–49, 2005.
[6] M. S. Arumugam and M. V. C. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Appl. Soft Comput., vol. 8, pp. 324–336, 2008.
[7] A. G. Gad, "Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review," Archives of Computational Methods in Engineering, vol. 29, pp. 2531–2561, 2022.
[8] E. H. Houssein, A. G. Gad, K. Hussain, and P. N. Suganthan, "Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application," Swarm Evol. Comput., vol. 63, p. 100868, 2021.
[9] M. R. Bonyadi, "Particle swarm optimization: theoretical analysis, modifications, and applications to constrained optimization problems," 2015.
[10] M. S. Sohail, M. O. Bin Saeed, S. Z. Rizvi, M. Shoaib, and A. U.-H. Sheikh, "Low-Complexity Particle Swarm Optimization for Time-Critical Applications," ArXiv, vol. abs/1401.0546, 2014.
[11] M. Asif, M. A. Khan, S. Abbas, and M. Saleem, "Analysis of Space & Time Complexity with PSO Based Synchronous MC-CDMA System," 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–5, 2019.
[12] M. S. Couceiro and P. Ghamisi, "Particle Swarm Optimization," 2016.
[13] Y. Tuppadung and W. Kurutach, "Comparing nonlinear inertia weights and constriction factors in particle swarm optimization," Int. J. Knowl. Based Intell. Eng. Syst., vol. 15, pp. 65–70, 2011.
[14] Z. Zhan and J. Zhang, "Adaptive Particle Swarm Optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, pp. 1362–1381, 2008.
[15] N. Lynn and P. N. Suganthan, "Comprehensive learning particle swarm optimizer with guidance vector selection," 2013 IEEE Symposium on Swarm Intelligence (SIS), pp. 80–84, 2013.
[16] H. S. Urade and R. Patel, "Study and Analysis of Particle Swarm Optimization: A Review," 2011.
[17] S. Das, A. Konar, and U. K. Chakraborty, "Two improved differential evolution schemes for faster global search," in Annual Conference on Genetic and Evolutionary Computation, 2005.
[18] L. Lu, Q.-Y. Luo, J. Liu, and C. Long, "An improved particle swarm optimization algorithm," 2008 IEEE International Conference on Granular Computing, pp. 486–490, 2008.
[19] Z. A. Ansari, "Performing Automatic Clustering Using Active Clusters Approach with Particle Swarm Optimization," 2015.
[20] Y. Liu, Q. Zhao, Z. Shao, Z. Shang, and C. Sui, "Particle Swarm Optimizer Based on Dynamic Neighborhood Topology," in International Conference on Intelligent Computing, 2009.
[21] Z. Li, X. Liu, and X. Duan, "A Particle Swarm Algorithm Based on Stochastic Evolutionary Dynamics," 2008 Fourth International Conference on Natural Computation, vol. 7, pp. 564–568, 2008.
[22] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," Proceedings of the 2005 IEEE Swarm Intelligence Symposium (SIS 2005), pp. 124–129, 2005.
[23] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204–210, 2004.
[24] E. S. Peer, F. van den Bergh, and A. P. Engelbrecht, "Using neighbourhoods with the guaranteed convergence PSO," Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS'03 (Cat. No.03EX706), pp. 235–242, 2003.
[25] A. A. Al-Shaikhi, A. H. Khan, A. T. Al-Awami, and A. Zerguine, "A Hybrid Particle Swarm Optimization Technique for Adaptive Equalization," Arab J Sci Eng, vol. 44, no. 3, pp. 2177–2184, 2019, doi: 10.1007/s13369-018-3387-8.
[26] C. Grosan and A. Abraham, "Search Optimization Using Hybrid Particle Sub-Swarms and Evolutionary Algorithms," 2005.
[27] L. Guo and X. Chen, "A Novel Particle Swarm Optimization Based on the Self-Adaptation Strategy of Acceleration Coefficients," 2009 International Conference on Computational Intelligence and Security, vol. 1, pp. 277–281, 2009.
[28] N. Sadati, M. Zamani, and H. Mahdavian, "Hybrid Particle Swarm-Based-Simulated Annealing Optimization Techniques," IECON 2006 - 32nd Annual Conference on IEEE Industrial Electronics, pp. 644–648, 2006.
[29] F. van den Bergh and A. P. Engelbrecht, "A Cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 225–239, 2004.
[30] B. Niu, Y. Zhu, and X. He, "Multi-population Cooperative Particle Swarm Optimization," in European Conference on Artificial Life, 2005.
