
Classification number:                Institution code:

Security level: Public                Student ID: 22010189

Master's Degree Thesis

Chinese thesis title:

English thesis title: AN IMPROVED DYNAMIC PARTICLE SWARM
OPTIMIZATION FOR MULTIMODAL DESIGN PROBLEMS IN
ELECTROMAGNETIC DEVICES

Applicant: AKARAWATOU ABDOUL-BAQUI

Supervisor: Prof. Shiyou Yang (杨仕友)

Co-supervisor:

Major: Electrical Engineering

Research direction:

School: College of Electrical Engineering

Date of submission: , 2023
Master's Thesis of Zhejiang University

Author's signature: _________
Supervisor's signature: _________

Thesis reviewer 1:

Reviewer 2:

Reviewer 3:

Reviewer 4:

Reviewer 5:

Defense committee chairperson:

Member 1:

Member 2:

Member 3:

Member 4:

Member 5:

Date of defense: , 2023


AN IMPROVED DYNAMIC PARTICLE SWARM OPTIMIZATION FOR
MULTIMODAL DESIGN PROBLEMS IN ELECTROMAGNETIC DEVICES
Dissertation Submitted to
Zhejiang University
in partial fulfilment of the requirements
for the degree of
MASTER
in
Electrical Engineering
by
AKARAWATOU ABDOUL-BAQUI

Dissertation Supervisor: Professor Shiyou Yang

, 2023


Author’s signature: __________


Supervisor’s signature: _________

External Reviewers:
Anonymous

Examining Committee Chairperson: Prof.

Examining Committee Members:

Prof.

Prof.

Prof.

Prof.

Prof.

_________________________

Date of oral defense:________________


Originality Statement for Graduate Degree Theses of Zhejiang University

I declare that the thesis submitted is my own research work, carried out under my supervisor's guidance, together with the research results obtained. Except where specifically marked and acknowledged in the text, the thesis contains no research results previously published or written by others, nor any material used to obtain a degree or certificate at Zhejiang University or any other educational institution. Any contribution made to this research by colleagues working with me has been clearly acknowledged in the thesis.

Author's signature:        Date:

Authorization for the Use of Degree Theses

The author of this thesis fully understands that Zhejiang University has the right to retain copies and electronic versions of the thesis and to submit them to relevant state departments or institutions, and to allow the thesis to be consulted and borrowed. I authorize Zhejiang University to incorporate all or part of the thesis into relevant databases for retrieval and dissemination, and to preserve and compile the thesis by means of reproduction such as photocopying, reduced-size printing, or scanning.

(For confidential theses, this authorization applies after declassification.)

Author's signature:        Supervisor's signature:

Date:                      Date:

The Originality Statement of the Graduate Degree Thesis of


Zhejiang University


I declare that the dissertation submitted is my own research work, carried out under my supervisor's guidance, together with the research results obtained. Except where specially marked and acknowledged in the text, the dissertation contains no research results previously published or written by others, nor any material used to obtain a degree or certificate at Zhejiang University or any other educational institution. Any contribution made to this research by colleagues working with me has been clearly acknowledged in the dissertation.

Author's signature:        Date:

Authorization for the Use of Copyright in Academic Theses

The author of this dissertation fully understands that Zhejiang University has the right to retain copies and electronic versions of this dissertation and to submit them to relevant state departments or institutions, allowing the dissertation to be consulted and borrowed. I authorize Zhejiang University to incorporate all or part of the dissertation into relevant databases for retrieval and dissemination, and to preserve and compile the dissertation by means of reproduction such as photocopying, reduced-size printing, or scanning.

(For confidential dissertations, this authorization applies after declassification.)

Author's signature:        Supervisor's signature:

Date of signature:         Date of signature:


Dedicated

To

My family for their love


ACKNOWLEDGMENT
First and foremost, I would like to thank my supervisor, Professor Shiyou Yang, as well as Shah Fahad and Shoaib Ahmed Khan, for their advice, support, and encouragement during my engineering studies; I am grateful to them for their patience and support. Next, I want to express my gratitude to the government of the People's Republic of China for the provision of the China Scholarship Council (CSC) scholarship and financial assistance.

Special thanks to my lovely wife and my parents, who always supported me in continuing my education, and to my father and brother for their prayers.

Finally, I would like to express my gratitude to my friends and lab colleagues at Zhejiang University for motivating me to pursue engineering research.


ABSTRACT
Electromagnetic inverse problems have been investigated for more than a decade. In general, the term refers to the optimal design of electromagnetic devices, a task that occurs naturally in many engineering disciplines.

A recent tactic for solving electromagnetic inverse problems is to split them into a number of direct problems and then solve these using a stochastic optimal technique. Thus, numerical techniques and stochastic algorithms play primary roles in the solution of electromagnetic inverse problems. Moreover, a finite element analysis is required at each iteration, meaning that the computational cost of an inverse problem is always very high compared with that of a direct problem. Consequently, many efforts have been made to improve the general structure of stochastic algorithms for solving these problems, and many real-world electromagnetic inverse problems have been solved by using these stochastic techniques.

Moreover, manufacturing an optimal design generally includes the optimal solution of inverse problems, which consists of determining the global optimal solution of one or more objective functions under given constraints. Since the objective function is generally multimodal, and because of the inefficiency of traditional deterministic and stochastic optimal algorithms in finding the global optimal solution of such a problem, the attention of many researchers has been devoted to the development of new stochastic optimal methods. Consequently, evolutionary algorithms (EAs) have been developed and have become the standard for solving global optimization problems in different engineering disciplines, because they can find global optimal solutions that are otherwise not obtainable using traditional optimal algorithms. Nevertheless, according to the no-free-lunch theorem, there is no universal optimizer that can solve all optimization problems. Thus, it is necessary to seek new global optimizers in the study of inverse problems, and there is a need to preserve the diversity of evolutionary algorithms for solving inverse problems.

Particle swarm optimization (PSO) is a stochastic optimal search algorithm developed from observations of bird flocks and fish schools. To support the diversity of the swarm and maintain the balance between exploitation and exploration searches, the proposed modified PSO algorithm is equipped with specially designed mechanisms for adaptively updating the algorithm parameters. These mechanisms help the algorithm strengthen its robustness and prevent premature convergence. Experiments are carried out on various complex unimodal and multimodal test functions, and in particular on the common engineering inverse problem of TEAM Workshop problem 22.

The proposed algorithms (three modified models) were examined using standard mathematical test functions and an electromagnetic design problem, TEAM Workshop problem 22, and their performance was then compared with that of existing methods. The numerical results and statistical analysis show the merits and superiority of the proposed method over other well-developed stochastic PSO counterparts.

Keywords: PSO algorithm; Electromagnetic inverse problem; Optimization design; Finite element method


CONTENTS

ACKNOWLEDGMENT.....................................................................................................................VIII
ABSTRACT..........................................................................................................................................IX
CONTENTS..........................................................................................................................................XI
ABBREVIATIONS.............................................................................................................................XIII
CHAPTER 1..........................................................................................................................................1
INTRODUCTION..................................................................................................................................1
1.1 Background of Inverse Problems..................................................................................................1
1.2 Inverse Problem Description in General.......................................................................................4
1.2.1 Linear inverse problems........................................................................................................4
1.2.2 Non-linear inverse problems..................................................................................................4
1.3 Inverse Problem in Engineering...................................................................................................4
1.4 Inverse Problem in Electromagnetic domain................................................................................8
1.5 Superconducting Magnetic Energy Storage (SMES) Device........................................................9
1.6 Scope of the thesis......................................................................................................................11
1.7 Research Contribution................................................................................................................12
CHAPTER 2.........................................................................................................................................13
A Brief Review of Optimizations Techniques......................................................................................13
2.1 Overview....................................................................................................................................13
2.2 Classification of Optimal Algorithms.........................................................................................15
2.3 Engineering Optimization...........................................................................................................16
2.4 Stochastic Optimization..............................................................................................................16
2.5 Metaheuristic Algorithms...........................................................................................................18
2.6 Optimization Techniques in a Variety of Applications...............................................................19
2.7 Swarm Intelligence.....................................................................................................................27
2.8 Conclusion..................................................................................................................................28
CHAPTER 3.........................................................................................................................................29
PSO ALGORITHM..............................................................................................................................29
3.1 Overview....................................................................................................................................29
3.2 History........................................................................................................................................31


3.3 Expression of theoretical analysis...............................................................................................35


3.4 Algorithm Design.......................................................................................................................37
3.4.1 The choice strategy..............................................................................................................37
3.4.2 Modification in particle’s updating formula........................................................................38
3.4.3 Modifying velocity update strategy.....................................................................................39
3.4.4 Hybridization of PSO..........................................................................................................39
3.5 Parameter Selection....................................................................................................................40
3.5.1 The Inertia Weight...............................................................................................................40
3.5.2 Social and Cognitive Parameters.........................................................................................40
3.5.3 Population Initialization.......................................................................................................41
3.6 Topological Configuration..........................................................................................................42
3.7 State of the Art of the PSO............................................................................................42
3.8 Conclusion..................................................................................................................................50
Chapter 4..............................................................................................................................................51
Improved Dynamic PSO for Problems in Electromagnetic Devices.....................................................51
4.1 Overview....................................................................................................................................51
4.2 A Modified Particle Swarm Optimization with a Smart Particle................................................52
4.3 The proposed modified PSO.......................................................................................................54
4.4 Dynamic inertia weight..............................................................................................................56
4.5 Learning parameters...................................................................................................................56
4.6 Convergence performances of different optimal algorithms.......................................................56
4.7 Results after testing all algorithms..............................................................................................58
4.8 TEAM WORKSHOP Problem 22..............................................................................................60
4.8.1 Description..........................................................................................................................60
4.8.2 Application..........................................................................................................................61
4.9 Conclusion..................................................................................................................................62
Chapter 5..............................................................................................................................................63
CONCLUSION....................................................................................................................................63
5.1 Conclusion..................................................................................................................................63
5.2 future work.................................................................................................................................64
Lists of Publications.........................................................................................................................65
References........................................................................................................................................66


ABBREVIATIONS
PSO Particle Swarm Optimization

IPSO Improved Particle Swarm Optimization

C1 Cognitive Constant

C2 Social Constant

Pbest Personal Best

Gbest Global Best

W Inertia Weight

SPSO Smart Particle Swarm Optimization

CF Convergence Factor

AIWF Adaptive Inertia Weight Factor

GA Genetic Algorithm

SCA Sine Cosine Algorithm

GWO Grey Wolf Optimizer

DE Differential Evolution

PV Photovoltaics

Mg Maximum Generation

Rly Rayleigh's method

Std Standard Deviation

Out Outcome

Q Mutation Operator


SMES Superconducting Magnetic Energy Storage

TEAM Testing Electromagnetic Analysis Methods

OF Objective Function

Ji Coil Current Density

Bmax Maximum Magnetic Flux Density

Wm Magnetic Energy

EA Evolutionary Algorithm

CI Computational Intelligence

EC Evolutionary Computation

SFO Sun-Flower Optimization

ACO Ant Colony Optimization

SOA Seagull Optimization Algorithm

AOA Arithmetic Optimization Algorithm

EO Equilibrium Optimizer

BOA Butterfly Optimization Algorithm

GBO Gradient Based Optimization

MFO Moth-Flame Optimization

CSA Crow Search Algorithm


CHAPTER 1

INTRODUCTION

1.1 Background of Inverse Problems

The best approach to dealing with an electromagnetic inverse problem is to break it down into a series of direct problems, which are then solved using an optimal approach. Numerical approaches and stochastic algorithms play a crucial role in solving inverse problems. Furthermore, each iteration generally requires a finite element analysis, resulting in a relatively high computational cost for the objective function in inverse problems compared to direct problems. Consequently, numerous researchers have made efforts to enhance the overall structure of stochastic algorithms to address these challenges. These algorithms have been successfully applied to solve a wide range of real-world technical inverse problems.

Inverse problems arise in various scientific fields, where the retrieval of a function or other high-dimensional quantity is typically based on a collection of indirect measurements. In recent times, inverse problems have gained significant attention because their practical solutions have the potential for scientific and commercial benefits. The nonlinearity and complexity of an inverse problem make it more challenging to solve than the corresponding direct problem. A well-posed problem is one that has a unique solution depending continuously on the data [1].

If any of these requirements is not fulfilled, the problem is considered "improperly formulated" or "ill-posed". Inverse problems are generally ill-posed in situations involving transient fields generated by lossy media or small geometric features. As a result, several issues commonly emerge in such cases:

a) There is a gap in the data: the observed data are incomplete or partial, for example, due to a lack of geographical or relevant information in the measurement data.

b) The experimental data are often contaminated by noise and random interference, which leads to pollution or distortion.

c) In some cases, observational data may not be available, and the direct model alone may not be able to provide the desired solution for an optimization problem.


d) Inaccurate processes and numerical approximations can generate nonphysical solutions, causing instability due to poorly posed system equations and limitations in solving discrete models.

Experiments on input-output or source-effect relationships were originally employed in engineering during the last century to measure the unknowns of physical equations. Detecting the missing model parameters is crucial to understanding the cause of the problem and achieving a solution. Consequently, comparing experimental data with the results of numerical simulations often leads to conflicting outcomes.

The inverse problem is a mathematical concept used to extract information about a physical object or process from measured data. Since we cannot directly observe the physical parameters, solving this problem is essential. Inverse problems are therefore considered among the most important and extensively studied mathematical problems in technology and industry [3].

However, inverse problems are often poorly formulated or ill-posed, as Hadamard pointed out. Complete measurement data alone do not guarantee a solution, and the solution is not necessarily unique. In our daily lives, we encounter inverse problems that can be solved easily if we are in good health. Consider our visual perception, for instance: our eyes receive visual input from a limited number of points, yet this allows us to navigate and interact with our environment.

When we encounter complex events or attempt to solve high-risk problems, we often find ourselves in an unfavorable (ill-posed) situation. In our daily lives, we frequently encounter ambiguous statements. We are all familiar with how easy it is to make mistakes when reconstructing past events from current evidence, such as reconstructing a crime scene from direct and indirect evidence or diagnosing a disease from medical test results. Weather forecasters, or doctors who use NMR imaging to study the structure of the brain, are experts in predicting the future and exploring inaccessible regions [5].

Inverse problems may have no solution in the desired domain, may have multiple solutions, or may have an unstable solution process in which small measurement errors result in arbitrarily large solution errors. To address an inverse problem, it is necessary first to solve a direct problem. Since "inverse" is the opposite of "direct", it implies going back or reversing the process. Consider problems in applied mathematics, where modeling specific physical fields or processes is a common challenge.


A typical direct problem involves constructing a function that describes a physical field or
process at any given time (if the field is non-static). If the process is not static, the boundary
conditions of the domain should be included in the formulation of the direct problem [4].

As a result, the study of ill-posed inverse problems is one of the active areas of research in engineering, with efforts focused on solving inverse problems directly or indirectly. Although many concepts are still under discussion and investigation, significant progress has been made. Additionally, electrical engineers have long been interested in design optimization.

Since the 1950s, deterministic and analytical methods have been utilized to solve these problems, including gradient-type algorithms, simplex approaches, and Lagrange multiplier methods. In recent years, non-deterministic strategies based on computational intelligence, such as artificial bee colonies and genetic algorithms, have emerged as significant tools in the field of artificial intelligence. Inversion involves the identification of sources and materials in the electromagnetic field, with established criteria determining the desired behavior of the system [3].

The inverse problem, also known as design optimization, is crucial for addressing the
following problems:

a) Achieving economic efficiency in equipment optimization projects while complying with


field dispersion requirements.

b) Making decisions regarding resource allocation to meet the field distribution requirement,
often utilizing electromagnetic field analysis in non-destructive testing.

c) Increasing material quantities that are essential in both electrical and biological
engineering.

Impedance imaging, for example, maps the body's electrical variations by injecting currents and measuring the resulting signals [5]. Due to the complexity of inverse problems, it has taken decades of research to overcome them [6]. As a result, numerous solutions have been developed and refined over the course of a century, making the resolution of inverse problems more accessible. In both cases, the regularization of inverse problems has improved device performance and long-term life expectancy.


1.2 Inverse Problem Description in General

An inverse problem refers to a scenario where the objective is to determine the optimal model parameters β that satisfy:

y = G(β) (1.1)

in which G is termed the forward operator and represents the explicit link between the model parameters and the experimental data y, and β is the unknown input to be estimated.

1.2.1 Linear inverse problems

In a discrete linear inverse problem, the data y and the model β are vectors representing a linear system, and the problem may be written as

y = Gβ (1.2)

where G represents the observation matrix or matrix operator.
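As a concrete sketch (not taken from the thesis), a discrete linear inverse problem of the form y = Gβ can be solved in the least-squares sense; the matrix G, the true parameters, and the problem sizes below are all invented for illustration:

```python
import numpy as np

# Illustrative sketch: a discrete linear inverse problem y = G @ beta,
# solved in the least-squares sense.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))           # observation matrix: 20 measurements, 3 parameters
beta_true = np.array([1.0, -2.0, 0.5])
y = G @ beta_true                      # synthetic, noise-free data

# Least-squares estimate of the model parameters from the data.
beta_est, *_ = np.linalg.lstsq(G, y, rcond=None)
print(beta_est)  # recovers beta_true in the noise-free case
```

With noisy data the same call returns the least-squares estimate rather than the exact parameters, which is the typical situation in practice.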

1.2.2 Non-linear inverse problems

A non-linear inverse problem is a more complex category of inverse problems. The relationship between data and model in non-linear inverse problems is more involved, as indicated by the equation

y = G(β) (1.3)

where G is a non-linear operator that cannot be separated to represent a linear mapping of the model parameters β onto the data.
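A hypothetical sketch of solving such a non-linear problem with a Gauss-Newton iteration, assuming a simple exponential forward operator G(β); the operator, data, and starting point are all invented for illustration and are not the thesis's method:

```python
import numpy as np

# Hypothetical non-linear inverse problem y = G(beta) with the invented
# forward operator G(beta)_i = beta0 * exp(beta1 * t_i), solved by Gauss-Newton.
t = np.linspace(0.0, 1.0, 30)
beta_true = np.array([2.0, -1.5])

def forward(beta):
    return beta[0] * np.exp(beta[1] * t)

def jacobian(beta):
    # Partial derivatives of the forward operator w.r.t. beta0 and beta1.
    e = np.exp(beta[1] * t)
    return np.column_stack([e, beta[0] * t * e])

y = forward(beta_true)            # synthetic, noise-free data
beta = np.array([1.0, -0.5])      # initial guess
for _ in range(50):
    r = y - forward(beta)         # residual between data and model prediction
    step, *_ = np.linalg.lstsq(jacobian(beta), r, rcond=None)
    beta = beta + step            # Gauss-Newton update

print(beta)  # converges to beta_true for this well-behaved example
```

Unlike the linear case, convergence here depends on the starting point; for strongly multimodal objectives such local iterations stall, which is the motivation for the stochastic methods discussed later in the thesis.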

1.3 Inverse Problem in Engineering

Similarly, an inverse problem in engineering is a way of calculating the causative factors that produced a collection of observations, such as reconstructing an image in computed tomography, a source in acoustics, or the earth's density from gravity field data. In consequence, an inverse problem begins with the outcomes and works backwards to find the causes. This is the opposite of a forward problem, which begins with the causes and ends with the results.

In engineering and mathematics, the important inverse problems are those that shed light on quantities we cannot study directly [7]. Inverse problems have been widely and successfully applied in mine detection and exploration, acoustics, biomedical imaging, communication systems, remote sensing, non-destructive testing, oceanography, geophysics, machine learning, and electromagnetics.

It is expected that the parameters of electric devices are known before assessing them. Element properties and source characteristics must be determined before an electric circuit can be evaluated. The computation of circuit voltages and currents, as well as other derived measures such as real and reactive power, is then possible. Similarly, the circuit's frequency-dependent or transient characteristics can be identified [8].

The electromagnetic field sources (currents and charges), their spatial distribution, and their temporal dependency are typically given. Furthermore, functions such as ε(x, y, z) (dielectric permittivity), σ(x, y, z) (electric conductivity), and μ(x, y, z) (magnetic permeability) must be defined to describe the spatial dependency of the properties of ferromagnetic, dielectric, and conducting media. The differential and integral field characteristics, such as flux density, field intensity, inductance, and capacitance, as well as pressures, moments of forces, and forces, may then be calculated based on the material properties, the sources, and the system geometry [9].

Engineers in electrical engineering face "inverse problems" when they need to define a system's components to accomplish desired results. The answers to the inverse problem for electric circuits include the circuit architecture, the values of the circuit element parameters, and the time-dependent variations of the sources [10].

In electromagnetics, the inverse problem consists of determining the source and material distribution patterns required to achieve a certain field distribution [11].

The parameters of devices in inverse problems are decided by the conditions (criteria)
provided in the problem formulation. These criteria reflect the system's desired ideal behavior
and/or properties. The following are some instances of such criteria:

a) The amount of reactive power that a circuit uses should be predetermined.


b) An electric circuit's voltage between its nodes must be less than just a preset value
during a transient.


c) It is important to ensure that the magnetic field intensity in a given area is uniform,
and that a conductor's cross-sectional shape minimizes conductor losses.

The following operator equation connects the variables in an inverse problem, defining the physical structure of the device for which the problem is described:

A(p)w = v (1.4)

where p is an unknown vector to be determined (the device's optimal parameters or a source-specific vector), w is the vector of variables, and v is the vector of sources.

For each p, one may find the inverse problem variables w using equation (1.4). Most of the time, it is necessary to transform the variables w into some other variables in order to make the inverse problem criterion easier to formulate; the result will be called the properties vector. The transformation from the variables w to the properties vector y is expressed as:

y = f(p, w) (1.5)

where f is generally a nonlinear vector function.

The idea of a functional may be employed while developing and addressing inverse problems. A functional I is a scalar quantity determined by a set of functions. In this explanation, the terms "objective function" and "functional" will be used interchangeably [12].

Examples of functionals include the definite integral of a function and its maximum value within a certain interval. A common functional in electrical engineering measures the closeness of the function y to the required value y*:

I(p, w) = ‖y* − y(p, w)‖ (1.6)

To describe the closeness of the functions y, any of the following commonly used norms may be employed [4]:

‖·‖₂ ≡ ( (1/(T − t₀)) ∫_{t₀}^{T} h²(t) dt )^{1/2},  ‖·‖_q ≡ ( (1/(T − t₀)) ∫_{t₀}^{T} h^q(t) dt )^{1/q},  ‖·‖_∞ ≡ max_{t ∈ [t₀, T]} |h(t)| (1.7)


where h(t) is a function of any type and q ≥ 1. It should be noted that when the ‖·‖∞ norm is applied, the functional cannot be differentiated, because the function "max" is not in general differentiable.
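The three norms of Eq. (1.7) can be checked numerically on a discretized time grid. This is an illustrative sketch, with h(t) = sin(2πt), the interval [0, 1], and q = 4 chosen arbitrarily:

```python
import numpy as np

# Illustrative numerical evaluation of the norms in Eq. (1.7) on a uniform
# time grid; h(t) = sin(2*pi*t) and the interval are chosen arbitrarily.
t0, T, n = 0.0, 1.0, 10_001
t = np.linspace(t0, T, n)
h = np.sin(2 * np.pi * t)

l2 = np.sqrt(np.mean(h**2))            # ||.||_2  (uniform-grid approximation)
q = 4
lq = np.mean(np.abs(h)**q) ** (1 / q)  # ||.||_q with q = 4
linf = np.max(np.abs(h))               # ||.||_inf

print(l2, lq, linf)  # approximately 0.7071, 0.7825, 1.0
```

Note that the ∞-norm is computed with a plain maximum, which is exactly the non-differentiable operation mentioned above.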

Suppose the ideal attributes of a device are described by a vector y* that is the target of an inverse problem. The criterion for this inverse problem may thus be written as I(p, w) → min.

Consider the following example. In linear DC circuit theory, the vectors of voltages and currents in the circuit branches correspond to the vector of variables w, while the vectors of voltage sources and current sources in the circuit branches correspond to the vector of sources v. Ohm's law and Kirchhoff's equations may then be summarized as follows [12]:

CU = CV,  U = RI,  DI = DJ (1.8)

Here, C and D represent the cut sets and contours of the circuit graph, whereas R is a diagonal matrix of the resistances of the circuit branches. In this case, the objective parameters p are the resistances of the circuit branches.

If the inverse problem entails determining a vector p that produces a voltage transfer factor K_U as close as possible to a prescribed value K_U*, then the properties vector of the inverse problem comprises only one component, i.e., it is a scalar. The criterion of the inverse problem can therefore be expressed as follows:

I(p) = |K_U(p) − K_U*| → min (1.9)
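The scalar criterion of Eq. (1.9) can be illustrated with a hypothetical voltage divider, where the transfer factor K_U(p) = p/(R1 + p) depends on an adjustable resistance p; the fixed resistance R1 and the target value below are assumptions made for this sketch, not values from the thesis:

```python
import numpy as np

# Hypothetical illustration of the criterion in Eq. (1.9): choose a resistance p
# so that the voltage transfer factor K_U(p) = p / (R1 + p) of a simple divider
# is as close as possible to a prescribed value K_U*.
R1 = 300.0            # fixed upper resistance (ohms), invented for the example
K_target = 0.25       # prescribed transfer factor K_U*

def K_U(p):
    return p / (R1 + p)

candidates = np.linspace(1.0, 1000.0, 100_000)   # admissible resistances to scan
criterion = np.abs(K_U(candidates) - K_target)   # I(p) = |K_U(p) - K_U*|
p_best = candidates[np.argmin(criterion)]

print(p_best)  # close to the analytic answer R1 * K* / (1 - K*) = 100 ohms
```

A brute-force scan works here because the criterion depends on a single scalar; for the multi-parameter, multimodal criteria considered later in the thesis, stochastic optimizers replace this exhaustive search.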

Take, for example, the case where w is a vector formed by the electrostatic potentials φ at grid points, generated by electric charges; charges with density σ at the grid nodes then form the vector v. The equation that relates these quantities is:

A(p)w = v (1.10)

Various forms of inverse problems must be handled in both electromagnetic field theory and circuit theory [7]. Their formulations can be quite different, and each type of problem has characteristics that should be considered when solving it.

Now let us take a closer look at the many forms of inverse problems that may be handled in electrotechnics. These problems may be classified as synthesis or identification problems. Furthermore, synthesis concerns include structural and parametric synthesis problems. Identification problems are associated with diagnostics, macro-modeling, and defectoscopy.

1.4 Inverse Problem in Electromagnetic domain

In the context of electromagnetic field theory, synthesis problems involve choosing a source whose value closely matches the desired field. It is crucial to ensure that the required field exhibits the specified spatial and/or temporal distribution along a line, on a surface, or within a specified volume. The currents or charges needed to achieve this scenario are the quantities to be synthesized. Similarly, in the context of media distribution and search considerations, the structure and form of bodies are often determined to closely match the specified requirements. In such cases, the properties of the media are used to construct the vector p [6].

In practical challenges related to electromagnetic field synthesis, a functional prototype of the


essential circuit or electrical device is typically available. An optimization or optimum design
problem arises when the objective is to find the best settings for improving the parameters of
the prototype. The same problem can be viewed as either an optimization or a synthesis


problem. In electromagnetic field theory, the term "optimization" is more commonly used,
while the term "synthesis" is more prevalent in electric circuit theory.

Difficulties arise in identifying mathematical models for electrical components, as they are
often based on the principles deduced from existing data that capture the input-output
characteristics of the component. Since this data is often sparse, the resulting mathematical
model is designed to accurately describe the observed behavior.

In electromagnetic field theory, the identification of sources or media involves analyzing field
measurements obtained from a given surface. Based on the field characteristics derived from
measurements on a visible surface, assumptions are made regarding the properties of the
media and sources that may not be directly observable from the measurements. These
considerations are particularly relevant in geological prospecting, where the distribution of
electric potential recorded on the surface allows for the assessment of the underlying media
structure. Non-destructive testing companies, for example, utilize eddy currents and generated
voltages to locate metal defects [7].

1.5 Superconducting Magnetic Energy Storage (SMES) Device

Superconducting Magnetic Energy Storage (SMES) is an energy storage device that utilizes a
superconductor to store direct current (DC) electricity within a magnetic field. This magnetic
field is generated by a conductor that becomes a superconductor when cooled to cryogenic
temperatures. Consequently, the superconductor exhibits no resistive losses. SMES enables
persistent storage, allowing the energy to be retained until it is needed. A complete SMES
device consists of several main elements, including a power conditioning system (PCS), a
superconducting coil with a magnet (SCM), a control unit (CU), and a cryogenic system (CS).

The versatility of SMES technology extends to various energy and electrical power systems.
Its applications encompass plug-in hybrid automobiles, microgrids, electrochemical battery
energy storage systems (BESSs), compressed air energy storage, hydrogen storage,
hydroelectric systems, thermal energy storage, flywheel energy storage, supercapacitors,
liquid metal batteries, cryogenic energy storage, pumped thermal electricity storage, and
renewable energy sources such as wind and photovoltaic systems. It is compatible with both
direct current (low and medium voltage) systems and alternating current power systems.
Additionally, it finds utility in fuel cell technologies and battery energy storage systems [13].


Every storage system comes with its own set of advantages and disadvantages. For instance,
battery energy storage systems (BESSs) are constrained by voltage and current limitations,
and the use of toxic substances in batteries can harm the environment. Hydroelectric pumped-
storage systems are location-dependent and may not be suitable for everyday applications. In
contrast, SMES addresses these limitations, offering a viable solution for storing electricity.
Its wide range of applications includes electrical energy and power systems [14].

In SMES, energy is stored within a coil when a direct current voltage is applied across its
terminals. Even after the voltage source is disconnected, the coil maintains its current flow.
This is because a superconducting coil cooled below its critical temperature exhibits
extremely low resistance, so the losses of the energy stored in the magnetic field created by
the circulating current are almost negligible. To release the stored energy, a discharging
circuit can be employed. SMES demonstrates higher efficiency than conventional coils due to
its ability to transition rapidly from a fully charged to a fully discharged state, while the
cryogenic cooling system keeps standby losses low [15].
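The storage principle above rests on the standard coil-energy formula E = L I^2 / 2. The sketch below only evaluates that formula; the inductance and current figures are illustrative, not parameters of any specific SMES device.

```python
def smes_energy(inductance_h, current_a):
    """Energy stored in a coil's magnetic field: E = 0.5 * L * I**2 (joules)."""
    return 0.5 * inductance_h * current_a ** 2

# Hypothetical figures: a 0.5 H superconducting coil carrying 1 kA
# stores 0.5 * 0.5 * 1000**2 = 250 kJ.
E_stored = smes_energy(0.5, 1000.0)
```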

Coil-to-grid power conditioning facilitates the connection of the superconducting coil


to the grid. This setup serves as a backup power source during outages by providing a
continuous supply of electricity to the grid. The design of these superconductive coils
significantly impacts the overall performance of the SMES system. The development and
optimization of coil shape are crucial stages in this process [16].

The advantages of solenoid geometry over toroid design are numerous, including greater
mechanical stress bearing capacity and lower overall cost. For SMES systems utilizing low-
temperature superconductors and small-scale configurations, solenoid geometry is a favorable
choice. It eliminates the need for recompression and simplifies the coiling process [18]. In
contrast to the traditional solenoid coil consisting of a single long coaxial coil, a step-shaped
cross-sectional design incorporates multiple shorter coaxial coils. Research has shown that a
thin solenoid coil with height h, radius R, and aspect ratio β = h/2R proves to be an
ideal shape for superconducting energy storage in this particular geometry [15], [19].

Q_s ∝ E^(2/3) B^(−1/3)                        (1.11)

where the subscript "s" denotes the solenoid. The stored energy per conductor is represented
in terms of:

K_s^(−1)(β) E^(1/3) B^(1/3)                   (1.12)
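A small numeric sketch of the aspect ratio and of the scaling relation in (1.11). Everything here is an assumption for illustration: Q_s is read as the relative quantity of superconductor, the proportionality constant is dropped, and the numerical values are arbitrary.

```python
def aspect_ratio(h, R):
    """Aspect ratio of a thin solenoid: beta = h / (2 * R)."""
    return h / (2.0 * R)

def sc_quantity_scaling(E, B):
    """Relative superconductor quantity per (1.11): Q_s ~ E**(2/3) * B**(-1/3).
    The proportionality constant is omitted, so only ratios are meaningful."""
    return E ** (2.0 / 3.0) * B ** (-1.0 / 3.0)

beta = aspect_ratio(1.0, 2.0)   # h = 1 m, R = 2 m  ->  beta = 0.25
# Storing 8x the energy at the same field needs 8**(2/3) = 4x the superconductor.
ratio = sc_quantity_scaling(8.0, 1.0) / sc_quantity_scaling(1.0, 1.0)
```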

In the realm of 2G HTS (second-generation high-temperature superconducting) materials,


toroid forms are often favored over other geometries. The unique characteristics of toroidal
coils make them highly suitable for various applications. One of the key advantages is that
toroids eliminate the need for shielding due to their ability to minimize the perpendicular
component of the magnetic field on the conductor. As a result, there are fewer stray fields and
lower losses in alternating current. This feature contributes to improved efficiency and
performance.

There are two common methods for constructing toroidal coils: helical toroids and modular
toroids. Helical toroids are formed through a continuous helical winding process, resulting in
a single, continuous coil with a toroidal shape. On the other hand, modular toroids are created
by assembling a sequence of small solenoids, which are individual coil segments. These
modular segments are carefully arranged to form the desired toroidal shape. Both approaches
have their merits and can be employed based on specific design and operational requirements
[20].

1.6 Scope of the thesis

The thesis proposes three modified models of Particle Swarm Optimization (PSO) for
optimizing a superconducting magnetic energy storage device. These novel approaches are
assessed using both a well-known superconducting magnetic energy storage device, known as
TEAM Problem 22, and mathematical benchmark test functions. The results demonstrate
significant improvements in the optimization process.

The document is structured as follows:

Chapter 2 provides a brief review of optimization techniques, offering an overview of


different methods used in the field. Chapter 3 presents the theoretical analysis and algorithm
design, providing a detailed explanation of the proposed modifications to PSO for optimizing
electromagnetic devices. Chapter 4 focuses on the improved dynamic PSO approach
specifically tailored for solving problems in electromagnetic devices. This chapter delves into
the intricacies of the proposed model and its advantages in optimizing such systems. Chapter
5 concludes the thesis. Overall, this thesis aims to enhance the efficiency and effectiveness of


optimization techniques applied to superconducting magnetic energy storage devices through


the introduction of novel PSO modifications. The findings highlight the improvements
achieved and contribute to the existing body of knowledge in this field.

1.7 Research Contribution

PSO is a relatively new addition to the field of evolutionary algorithms. It was developed by
Kennedy and Eberhart, drawing inspiration from the social behavior of birds flocking and
fish schooling in their quest for food [1]. The PSO algorithm is conceptually simple and easy
to implement numerically. It is a global optimization technique that utilizes a population,
where each member represents a particle and serves as a potential solution to the optimization
problem. In PSO, individuals navigate a multidimensional space with associated velocities.

However, compared to other well-established stochastic methods, PSO is still considered an


emerging methodology and is in its early stages of development. One limitation is its
relatively lower global search ability. For instance, PSO may encounter premature
convergence when attempting to find global optima for challenging optimization problems.
Additionally, the algorithm may experience stagnation phenomena, leading to being trapped
in local minima.

The aim of this dissertation is to enhance the global search ability and convergence
performance of PSO for multimodal design problems in electromagnetic devices. The proposed
approach introduces a dynamic inertia weight, which helps strike a balance between local and
global searches, thereby improving the overall performance.
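The PSO mechanics described above, combined with a dynamic inertia weight, can be sketched as follows. This is a minimal illustration using a conventional linearly decreasing weight; the modified models actually proposed in this thesis may differ, and all parameter values (c1 = c2 = 2, w from 0.9 to 0.4) are common defaults, not taken from this work.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, seed=1):
    """Minimal PSO minimizer with a linearly decreasing inertia weight
    (one common 'dynamic' scheme; not the thesis's exact modification)."""
    random.seed(seed)
    dim = len(bounds)
    c1 = c2 = 2.0
    w_max, w_min = 0.9, 0.4
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        # Dynamic inertia weight: large early (global search), small late (local).
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Clamp the new position to the search bounds.
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

# Sanity check on the 2-D sphere function, whose minimum is at the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```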


CHAPTER 2

A Brief Review of Optimizations Techniques

2.1 Overview

In recent times, researchers, engineers, economists, and managers have faced numerous
technology-related decisions concerning the construction and maintenance of their respective
systems. As the world continues to modernize, become more complex, and grow increasingly
competitive, decision makers are compelled to act in an optimal manner. Consequently,
optimization methods have gained immense importance in achieving the best possible
outcomes under such circumstances.

The concept of optimization emerged in the 1940s when the British military encountered the
challenge of efficiently allocating limited resources, such as fighter airplanes, submarines, and
others, to various tasks. Since then, researchers, mathematicians, and scientists have devoted
significant efforts to exploring diverse solutions for both linear and non-linear optimization
problems.

In the present era, optimization has gained widespread significance due to the rapid depletion
of inexpensive energy sources and the necessity to maximize profits while also striving to
preserve and enhance the natural environment. This heightened focus on solving complex
engineering challenges, particularly those that involve multiple dimensions and intricate
mathematical considerations, has become a prevailing trend. Minimization and maximization

are the simplest forms of optimization. When the objective function f(x) is simple enough, it
is feasible to use the first derivative to locate candidate stationary points and the second
derivative to determine whether each candidate is a minimum or a maximum. Nonlinear,
multimodal, and multivariate functions, on the other hand, make this approach difficult.
Moreover, it might be difficult to obtain derivative information for functions that have


discontinuities in them. Because of this, traditional methods like hill-climbing may confront a
wide range of difficulties [21].
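The first- and second-derivative tests mentioned above can be sketched for the hypothetical function f(x) = x^4 − 2x^2, whose derivatives are easy to write down by hand:

```python
def fprime(x):
    """First derivative of f(x) = x**4 - 2*x**2:  f'(x) = 4*x**3 - 4*x."""
    return 4 * x ** 3 - 4 * x

def fsecond(x):
    """Second derivative:  f''(x) = 12*x**2 - 4."""
    return 12 * x ** 2 - 4

critical = [-1.0, 0.0, 1.0]   # roots of f'(x) = 0
kinds = ["min" if fsecond(x) > 0 else "max" for x in critical]
# x = -1 and x = 1 are local minima; x = 0 is a local maximum.
```

For a discontinuous or black-box objective this classification is unavailable, which is exactly why derivative-free and stochastic methods are needed.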

The following is a common way to express an optimization problem:

minimize    f_i(x),        i = 1, ..., I,
subject to  h_j(x) = 0,    j = 1, ..., J,
            g_k(x) ≤ 0,    k = 1, ..., K,          (2.1)

where f_1, ..., f_I are the objectives, and h_j and g_k are the equality and inequality
constraints, respectively. It is a single-objective problem when I = 1, whereas it is a
multi-objective optimization problem when I > 1, and the strategy to solve it differs from
one to the other.

In general, the functions f_i, h_j, and g_k are nonlinear. In the case where all of these
functions are linear, the optimization problem may be addressed using the traditional simplex
approach. Metaheuristics may be used to solve a wide range of nonlinear optimization
problems. If −f_i is substituted for f_i in the formula, the minimization problem may be
rewritten as a maximization problem; likewise, if g_k is replaced with −g_k, the inequality
constraints can be expressed in the opposite direction [22].
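The minimization/maximization equivalence can be verified numerically. The concave test function and the search grid below are arbitrary illustrations:

```python
def f(x):
    """A concave test function with maximum f(2) = 5."""
    return -(x - 2.0) ** 2 + 5.0

xs = [i / 1000.0 for i in range(0, 4001)]   # crude grid over [0, 4]
max_f = max(f(x) for x in xs)
min_negf = min(-f(x) for x in xs)
# The identity  max f = -min(-f)  holds on any search set.
```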

Multimodal cost functions frequently arise in electrical engineering design challenges,
including inverse and optimal design problems. An inverse problem is typically
ill-conditioned and has a non-unique solution. The non-uniqueness of the solutions is not
addressed in this chapter; instead, the focus is solely on finding a solution to an inverse
or optimal problem. There has
recently been an increasing effort, notably to explore stochastic and heuristic algorithms,
because the widely-used, traditional optimal methods are unsuited for discovering global
optimal solutions. Evolutionary algorithms (EAs), such as genetic algorithms, tabu searches,
ant colonies, and simulated annealing, have a great capacity for global searches.

Simultaneously, investigators have attempted to build numerous nature-inspired algorithmic


frameworks that are state-of-the-art in order to improve computational power and the variety


of search space in engineering function optimization. More effort will be devoted to


optimization approaches in order to tackle engineering optimization challenges emanating
from electromagnetics.

Prior studies show that practical optimization problems often contain several local minima
besides the global optimum, and stochastic approaches attempt to reach the region of the
global optimum. A common weakness of these approaches is a slow rate of convergence or a need
for extensive parameter tuning. It is therefore critical to enhance such algorithms so that
they strike a good balance between simplicity, reliability, and computational performance,
eliminating needless computational cost and producing a robust method for the case study.

2.2 Classification of Optimal Algorithms

Optimization algorithms can be separated into gradient-based and gradient-free procedures
according to whether a function's derivative or gradient is used. Gradient-based algorithms,
like hill climbing, heavily rely on derivative information and are typically extremely
efficient in using it. When no derivative information is available, a derivative-free
algorithm can be used instead. When functions are discontinuous or computing derivatives is
costly, derivative-free methods like the Nelder-Mead downhill simplex are expedient [23].

There are two types of optimization algorithms: deterministic and stochastic. A deterministic
algorithm introduces no randomness into its calculations: started from the same point, it
always produces the same result. Hill climbing and the downhill simplex are prominent
deterministic examples. A stochastic algorithm, by contrast, incorporates randomness, so even
from the same starting point it may arrive at a different result each time; the genetic
algorithm is a good example. Algorithms can also be categorized by their search capacity into
local and global search algorithms. Local search algorithms, such as hill climbing, are
generally deterministic and unable to escape local optima rather than locating the global
optimum, so they are not the ideal choice for global optimization. Even if they
are not always effective or efficient, contemporary metaheuristic algorithms are suitable for


global optimization in the majority of circumstances. Using a basic method like hill climbing
with random restarts, it is feasible to change a local search algorithm into one that can do
global searches. Global search methods benefit greatly from randomization [24].
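The random-restart idea mentioned above can be sketched as follows. The bimodal test function and all parameter values (step size, iteration and restart counts, seed) are hypothetical choices for illustration:

```python
import random

def hill_climb(f, x0, step=0.1, iters=200):
    """Local descent: accept a random neighbour only if it improves f."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def hill_climb_restarts(f, lo, hi, restarts=20, seed=3):
    """Random restarts turn the local search into a crude global one."""
    random.seed(seed)
    best_x, best_f = None, float("inf")
    for _ in range(restarts):
        x, fx = hill_climb(f, random.uniform(lo, hi))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# A bimodal test function: local minimum near x = 2, global minimum near x = -2.2.
g = lambda x: (x * x - 4.0) ** 2 / 8.0 + x
x_star, f_star = hill_climb_restarts(g, -4.0, 4.0)
```

A single hill climb started near x = 2 would stall in the shallow basin; the restarts sample both basins, so the returned point lies in the deeper one.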

2.3 Engineering Optimization

Evolutionary algorithms are commonly employed in engineering design challenges to seek the
optimal solution. While these theoretical responses hold significance, real-world engineering
design problems are often riddled with uncertainties that are frequent, inevitable, and cannot
be avoided. For example, precise manufacturing according to design specifications can be
challenging, necessitating the consideration of production uncertainties [25].

Consequently, operational and decision parameters of equipment tend to change over time. If
the optimal value is highly sensitive to even slight deviations from expected variables, a small
adjustment in the optimized parameters could lead to substantial performance loss or
impractical solutions. Hence, the focus should shift towards recommending a robust design
that not only demonstrates high performance in terms of the objective function but also
exhibits resilience against minor perturbations. This shift towards robust optimum design has
gained significant attention in the field of computational electromagnetics.

Robust design strategy effectively mitigates the impact of product-to-product variations,


production inconsistencies, and environmental factors on the functionality of a product.
Although its importance has been widely acknowledged across engineering disciplines,
extensive research on resilient design approaches has only been conducted for a relatively
short period of fewer than 10 years.

As a result, numerous unresolved concerns, both practical and theoretical, still remain. Given
these rigorous standards, substantial efforts have been dedicated to the search for robust
solutions alongside traditionally high-quality ones. Robust design approaches have emerged
as a prominent topic within the domains of evolutionary computation and engineering design
over the past few years [26].

2.4 Stochastic Optimization

Researchers in both engineering and science demonstrate a keen interest in the field of multi-
objective optimization or vector optimization. This interest arises not only due to the


prevalence of multi-objective structures in real-world problems but also because there are
various unresolved concerns surrounding this topic. Surprisingly, a universally accepted
definition of "optimum" in the context of multi-objective optimization is yet to be established,
making it challenging to compare results obtained through different techniques. Ultimately,
the determination of the optimal solution rests with the decision-maker. The solution set of an
optimization problem encompasses all choice vectors for which the objective vectors cannot
be improved without compromising another dimension. When addressing a multi-objective
inverse problem with competing objective functions, the resulting solutions constitute a set of
non-dominated solutions commonly referred to as the Pareto front or Pareto optimal solutions
[27].

Consequently, an ideal multi-objective optimization technique should be capable of


identifying and sampling the Pareto front. This enables decision-makers to make well-
informed choices based on the system's operational circumstances and environment. Scientists
in the computational electromagnetics community are currently developing vector
evolutionary algorithms (EAs) that can effectively tackle multi-objective inverse problems.
As engineering and numerical methodologies advance, these algorithms possess elite
structures that enable the exploration of alternative approaches within a single run. While
constructing vector EAs, two critical concerns that need to be addressed are the fitness
projection mechanism, which guides the exploration towards the Pareto front, and the
technique ensuring diversity and uniformity of the final solutions. Research efforts in the field
of vector EAs are dedicated to identifying actual Pareto solutions and improving convergence
speed [28].
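The non-dominance relation that defines the Pareto front can be sketched as follows, assuming minimization of every objective; the sample objective vectors are arbitrary:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better
    in at least one (minimization of every objective is assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)   # (3, 3) is dominated by (2, 2); the rest survive
```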

Global optimization often employs mathematical test functions (objective functions) to assess
an algorithm's capability for global search. Traditional local optimization methods prove
ineffective when it is not known whether the objective function is unimodal or multimodal.
Consequently, the development of global optimization methodologies has been significant.
Global optimization has remained an active and cutting-edge research topic for several
decades due to the increasing complexity of real-world optimization problems that require
robust approaches [29].
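A classic example of such a multimodal mathematical test function is the Rastrigin function, sketched below (it is a standard benchmark in the global-optimization literature, though the benchmark set used in this thesis may differ):

```python
import math

def rastrigin(x):
    """Rastrigin function: highly multimodal, global minimum f(0, ..., 0) = 0."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

f_global = rastrigin([0.0, 0.0])   # the unique global optimum
f_local = rastrigin([1.0, 1.0])    # one of many nearby local minima
```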

Electromagnetic design projects often involve complex optimization problems that demand
efficient and robust optimization techniques. With advancements in computer performance
and efficiency, numerical techniques have become increasingly popular for addressing real-


world engineering inverse problems. These techniques have enabled the treatment of
engineering inverse problems in a manner that surpasses the capabilities of analytical
methodologies. The choice of an optimal algorithm is crucial in the design process. In this
field, traditional deterministic optimization methodologies have gradually been replaced by
computational intelligence techniques [30].

Researchers have focused on developing new stochastic optimization algorithms rather than
attempting to address existing issues with deterministic or stochastic optimal techniques. This
is due to the fact that the objective function of an inverse problem is typically multimodal. As
a result, evolutionary algorithms (EAs) have emerged as the industry standard for tackling
global optimization challenges across various engineering domains. EAs excel in identifying
global optimum solutions that would be difficult to reach using traditional ideal
methodologies. Researchers have successfully adapted and applied several evolutionary
algorithms to address diverse inverse problems [31].

2.5 Metaheuristic Algorithms

The stochastic components of algorithms were previously referred to as "heuristics"; in the
literature, however, they are now commonly known as "metaheuristics." Following Fred Glover's
example, we will adopt the term "metaheuristics" to encompass all nature-inspired algorithms.
The prefix "meta" indicates that these algorithms operate at a higher level than ordinary
heuristics and yield superior performance. Glover introduced the term "metaheuristic" in his
influential study [32].

A metaheuristic is described as a "master strategy that guides and modifies other heuristics to
produce solutions that go beyond those typically achieved in the pursuit of local optimality."
This is what makes metaheuristic algorithms so effective: they balance between
randomization and local search. While there is no guarantee that these algorithms will find the
best solutions to challenging optimization problems within a reasonable time frame, they are
designed to perform well in the majority of cases. Most metaheuristic methods are suitable for
global optimization [33].

Metaheuristic algorithms possess two fundamental characteristics: exploitation (also called
intensification) and exploration (also called diversification). Diversification involves
searching for multiple solutions to gain a better


understanding of the global search space, while intensification narrows down the search to a
specific local area where a promising solution has already been found. Achieving an
appropriate balance between intensification and diversification during solution selection is
crucial for optimizing the convergence rate of the algorithm. Selecting the best solutions
narrows down the search towards the optimum, while randomization helps to avoid local
optima and expands the search space. The perfect harmony between these two primary
components is necessary to achieve global optimality [34].
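The intensification/diversification balance can be illustrated with the Metropolis acceptance rule from simulated annealing (one concrete metaheuristic mechanism, offered as an example rather than a general definition): at a high "temperature" worse moves are accepted often, favouring exploration, while at a low temperature the search concentrates on improvements, favouring exploitation. The temperatures and move cost below are arbitrary.

```python
import math
import random

def accept(delta, T):
    """Metropolis rule: always accept improvements (delta <= 0), and accept a
    worsening move of cost delta with probability exp(-delta / T)."""
    return delta <= 0 or random.random() < math.exp(-delta / T)

random.seed(0)
trials = 10000
hot = sum(accept(1.0, 10.0) for _ in range(trials)) / trials   # ~exp(-0.1) ≈ 0.9
cold = sum(accept(1.0, 0.1) for _ in range(trials)) / trials   # ~exp(-10) ≈ 0.0
```

Cooling the temperature over the run shifts the algorithm smoothly from the diversification regime (`hot`) to the intensification regime (`cold`).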

Metaheuristic algorithms, often grouped under computational intelligence (CI), are extensively
documented in the scientific literature. Instead of relying on conventional methods, nature-
inspired computational methodologies and approaches are employed to tackle complex real-
world problems. Swarm intelligence (SI) is a branch of evolutionary computation (EC) that
focuses on the collective behavior of decentralized, self-organized systems, both natural and
artificial. SI systems consist of numerous simple agents (individuals) that interact with each
other and their environment on a local scale. Biological processes often serve as inspiration
for these algorithms.

SI-based algorithms follow relatively simple rules and lack a centralized control framework
dictating how individual agents should behave. As a result of interactions among agents,
intelligent global behavior emerges, which is not exhibited by individual agents alone.

Several well-known examples of SI include ant colonies, bird flocks, animal herds, bacterial
growth, and schooling fish. Kennedy and Eberhart developed the particle swarm optimization
(PSO) algorithm based on the flocking behavior of birds [36]. Dorigo proposed the ant colony
optimization (ACO) algorithm inspired by the behavior of ant colonies [37]. Storn and Price
were the first to introduce the differential evolution (DE) algorithm [38]. Karaboga and
Basturk developed the artificial bee colony (ABC) algorithm by mimicking honey bee
foraging behavior [39]. The glow-worm swarm optimization (GSO) technique, in which agents
represent glow-worms carrying a luminescence quantity called luciferin, was proposed by
Krishnanand and Ghose [40]. Yang proposed the bat algorithm, inspired by microbat
echolocation [41].


2.6 Optimization Techniques in a Variety of Applications

In addition, a variety of optimization techniques have been effectively adapted and applied
by various researchers to address different problems. A description of various optimization
technique variants and their applications is provided in the following paragraphs to assist
readers in inverse and optimal design research.

In [42], the author proposes a new method, based on a real ant colony optimization technique,
for tackling inverse problems on fractional derivative models. In another work, a
multi-objective cuckoo search approach is used to optimize the design of an axial pump's
turbine; the problem may be solved using two different approaches.

Another suggested approach makes use of the Multi-Objective Spotted Hyena Optimizer. The
non-dominated Pareto optimal solutions are stored in a fixed-size archive. To imitate the
social and hunting behaviors of spotted hyenas, a roulette-wheel mechanism is applied to
choose the most successful solutions from the archive. The developed approach is compared
with various previously established metaheuristic techniques and evaluated on standard
benchmark functions. It is then validated on six constrained engineering design problems
to show that it can be used in real-world situations [43].

To keep the algorithm fresh, scouts are utilized to discover new food sources, and new update
formulae are introduced for the employed bees and onlookers. An age variable is introduced to
determine how "exhausted" a food source is and when it should be abandoned. The work is then
tested on an inverse problem [44].

Sun-Flower Optimization (SFO), a novel optimization approach, simplified the solution of the
inverse damage-identification problem with several damage sites and two independent objective
functions based on natural frequencies and mode shapes. According to the numerical results,
including mode shapes in a multi-objective formulation improves the ability to correctly
identify the damage in terms of its location and severity. Compared with other algorithms,
the multi-objective SFO technique gave significantly superior results [45].

In order to save computation time, the author implemented a genetic algorithm and applied a
physics-based fast coarse model in the global optimization technique. Coarse-model
computations are rapid, and only a few fine-model assessments are required to ensure the

accuracy; the proposed approach was validated by optimizing TEAM workshop problem 25 [46].

An improved cross-entropy approach for the global optimization of inverse problems with


continuous variables has been presented. To increase the speed of convergence, modifications
to the computation and the iterative process are provided, and the approach is
subsequently used to solve TEAM workshop problem 22 [47].

The author proposes a three-step search procedure, consisting of intensification,


diversification, and refinement, as a new strategy for solving inverse problems. To expedite
the search without compromising accuracy, two "new point generation" techniques and a
"dynamic parameter update" rule may be utilized. The effectiveness of the method was then
evaluated using TEAM workshop problems 22 and 25 [48].

The Seagull Optimization Algorithm (SOA), a novel bio-inspired technique, has been utilized
to solve computationally demanding problems. The migration and attacking behaviors of
seagulls in nature served as the major motivation for this approach; these behaviors are
mathematically formulated and implemented to emphasize exploitation within a given search
space [49]. The Arithmetic Optimization Algorithm (AOA) is a novel metaheuristic approach
that exploits the distribution behavior of the main arithmetic operators [50].

The Multi-Objective Ant Lion Optimizer is a multi-objective extension of the ant lion
optimizer. First, a repository is used to hold the non-dominated Pareto optimal solutions
found so far. Then the roulette-wheel approach is utilized to select solutions from this
repository based on their distribution, which is used to direct ants toward promising
multi-objective exploration regions [51].

The Equilibrium Optimizer (EO) is an algorithm inspired by mass-balance models used to
estimate both equilibrium and dynamic states [52]. The Whale Optimization Algorithm (WOA) is
a nature-inspired meta-heuristic that mimics the social behavior of humpback whales; the
algorithm was influenced by their bubble-net hunting strategy [53]. The author of [54]
suggests a gradient-based optimizer (GBO), which is based on Newton's gradient method. A
local escaping operator and a gradient search rule are used to explore the space, together
with a collection of vectors. The gradient search rule employs a gradient-based method to
move toward superior search-space locations, while the local escaping operator helps GBO
avoid being trapped in local optima.


A novel optimization approach, termed the Moth-Flame Optimization (MFO) algorithm, has been
established. The major inspiration for this optimizer is the transverse-orientation
navigation used by moths in nature: moths fly at night by maintaining a fixed angle with the
moon, a remarkably effective mechanism for travelling long distances in a straight line.
These insects, however, become trapped in a useless spiral path around artificial lights.
The work models this behavior mathematically for optimization [55].

Emperor Penguin Optimization (EPO) is a novel algorithm that mimics the huddling behavior of emperor penguins (Aptenodytes forsteri). The main steps of the EPO are to define the huddle boundary, compute the temperature around the huddle, determine the distance between penguins, and identify the most effective mover. The proposed technique is tested on six real-life constrained and one unconstrained engineering problem [56].

In [57], the author presents the Butterfly Optimization Algorithm (BOA), a new nature-inspired algorithm for solving complex real-world problems. The BOA solves global optimization problems by mimicking the feeding, hunting, and mating behavior of butterflies. The paradigm is mostly inspired by the butterflies' foraging approach, which involves using their sense of smell to locate nectar or mating partners. To tackle multi-objective engineering design problems, an innovative and efficient multi-objective artificial algae algorithm is described and presented; the technique is based on the search strategy of the artificial algae algorithm [58].

The Grey Wolf Optimizer (GWO), a relatively recent algorithm motivated by the social hierarchy and hunting behavior of grey wolves, is a very effective method for tackling complex engineering problems. In the original GWO, half of the iterations are devoted to exploration and half to exploitation, which overlooks the importance of properly balancing the two in order to obtain an accurate approximation of the global best [59].

The cognitive behavior of crows is utilized to develop the crow search algorithm (CSA). The approach is population based and rests on the observation that crows hide surplus food and retrieve it when needed. Six constrained engineering design problems are solved using CSA [60].

The novel differential evolution (NDE) is a new way to solve constrained engineering problems. The proposed NDE uses a new triangular mutation principle: a convex combination of a triplet of three randomly chosen vectors, together with the difference vectors between them. By adding this triangular mutation operator [61], a better balance between the global and local searches is achieved.

The Sailfish Optimizer (SFO), a nature-inspired metaheuristic optimization algorithm, has been reported to solve many scientific and engineering problems owing to its versatility and ease of use. SFO was inspired by a group of hunting sailfish. It uses two populations: the sailfish population to intensify the search around the best solution found so far, and the sardine population to widen the search area [62].

Ant colony optimization is used to ensure that AC electricity connected to the grid does not experience further problems. This approach regulates the inverter gating signal, which converts DC power from renewable sources to AC. Superconducting magnetic energy storage (SMES) can be used to integrate solar panels and wind turbines into a micro-grid; in terms of efficiency and distortion, SMES surpasses battery systems [63]. In [64], an improved cross-entropy approach for the global optimization of inverse problems with continuous variables is reported and applied to the SMES device.

A novel quantum-inspired evolutionary method has been designed for electromagnetic device design optimization; a new information-sharing approach is incorporated in the proposed algorithm. The presented approach is tested on an engineering inverse problem, and the results are encouraging [65]. The author of [66] provides a bat algorithm aimed at the brushless DC wheel motor problem, in both its mono-objective and multi-objective versions. Furthermore, the findings are compared with those of other optimization methodologies, demonstrating the practicality of this newly introduced technique for highly nonlinear problems in electromagnetics [66].

The author hybridized the flock-of-starlings optimization and the bacterial chemotaxis algorithm; new methods for solving multimodal optimization problems at an acceptable computing cost have been developed and applied to the optimization of the eight-parameter version of TEAM problem 22 [67].

A quantum-inspired evolutionary method has been proposed for addressing inverse problems and improving convergence speed without sacrificing diversity. A new concept of global information sharing is incorporated. The purpose of this study is to establish a good balance between exploitation and exploration in the search space, and to develop a novel migration strategy and a formula for dynamically adjusting the rotation angle [68].

An improved particle swarm optimization technique known as QPSO is described and evaluated. Numerical findings reveal that the suggested QPSO can achieve superior optimal outcomes at a low processing cost. In addition, the proposed approach involves the tuning of only one parameter. This study thus offers a simple and efficient global optimizer for inverse-problem optimization experiments [69].

In [70], a newly proposed, modified multi-strategy particle swarm approach is used for device structure optimization. The device concepts are introduced first, followed by an examination of the electromagnetic and permanent-magnet forces. The objective functions of the device optimization include minimal electromagnetic loss and investment, as well as the largest load-reduction degree within the permitted scope. The study then provides a multi-start strategy for increasing swarm diversity, addressing weaknesses identified in the standard PSO algorithm.

In [71], the PSO process is adopted to tune the parameters of a proportional-integral controller to make the system more robust. Experimental results show that the suggested technique of regulating the drive system maximizes the output torque and speed range, while the PSO-PI controller minimizes torque ripple and improves control stability. The implementation of a multi-objective cuckoo search approach for the turbo-machinery design optimization of an axial pump is described in [72]. The goal of the optimization is to maximize overall efficiency while reducing the pump's required net pressure level. The method is applied to a set of imposed flow rates, optimizing at each discretized radius between the rotor's hub and tip.

The adaptive and dynamic parameters of the GPSO method strike a balance between the exploration and exploitation search abilities, while the improved parameters manage population diversity throughout the final stages of the selection process [73].

To improve the performance of the conventional PSO, a modified PSO approach is used to dynamically adjust the number of particles. In this approach, the global best cost of each iteration is compared with the global best of the previous iteration. When the cost values differ, the proposed approach moves to the exploration stage while keeping the number of particles constant; the modified mechanism is invoked only once the user-assigned tolerance on the cost difference is met. Experiments showed that the proposed PSO outperformed other well-designed variants. The method is used to optimize the design of an interior permanent magnet synchronous motor in order to minimize the total harmonic distortion of the back electromotive force [74].

In [75], a discrete PSO optimizer is reported; it uses two multi-objective PSO versions to solve an optimization problem. The patch antenna design necessitates discrete PSOs, such as binary and Boolean PSOs with velocity mutation. Furthermore, the array-thinning design challenge is examined with a binary PSO and compared across different transfer functions. PSO-based algorithms (QPSO, DPSO, and SPSO) and a domain-shrinking technique for electromagnetic device optimization were introduced by the author; the electromagnetic device of TEAM problem 22 is optimized using this method [76].

A modified QPSO is proposed in [77], with several enhancements made based on the core principle of the PSO algorithm and quantum theory. A random particle is selected to participate in the current search area. Two additional factors improve the efficiency of the global search: a mutation technique for the mean best location and an enhancement factor to avoid premature convergence. Further, several approaches for parameter updating to balance the exploration and exploitation searches are provided. Experiments have been conducted on an inverse problem and well-known multimodal functions.

The authors in [78] proposed a robust optimization algorithm that allows for global optimum tracking and the handling of manufacturing uncertainty factors in the design of high-speed permanent magnet motors. Both objectives contain a novel robustness criterion that considers the impact of construction-tooling uncertainties on the design variables. The use of an adaptive network-based fuzzy inference system in place of time-consuming finite element studies resulted in a computationally fast and robust electric machine design technique, which was validated on the finite-element design of a high-speed motor.

The author of [79] proposes a robust optimization technique that measures and considers both the interpolation uncertainty of the multi-fidelity meta-model and the uncertainty of the design variables. To demonstrate the efficacy and advantages of the approach, two numerical examples and an optimization problem for the construction of a long-cylinder pressure vessel are examined. The results show that the proposed approach can produce an optimal solution even when the uncertain variables in the test cases are perturbed.

For the economic dispatch of renewables in active distribution networks, the author proposes a novel data-adaptive robust optimization technique. The suggested method incorporates a scenario-generation mechanism as well as a two-level optimizer. In place of the standard uncertainty set, a few extreme scenarios selected from historical data are used to reduce conservatism. The provided extreme-scenario selection method is based on correlations and may be applied to a variety of historical data sets [80].

A novel solution to a benchmark TEAM problem for multi-objective optimization has been published, concerning the optimal design of magnetic devices; the problem's key distinctive feature is its multi-objective paradigm. A chaotic optimization algorithm based on swarm intelligence is employed to solve this problem [81]. For analyzing multimodal multi-objective problems, the author presents a modified particle swarm optimization approach in [82]. The global learning strategy is replaced with a dynamic neighborhood-based learning method, which promotes population diversity. Meanwhile, PSO's performance is further improved through the implementation of a competition-based approach.

The author of [83] proposes a modified multi-objective optimization technique by comparing the particle swarm optimization algorithm with the real-coded elitist non-dominated sorting genetic algorithm and other related existing operators.

A novel cooperative PSO approach with a reference-point-based likelihood strategy has been proposed to handle dynamic multi-objective optimization problems. Using a novel learning method, multiple swarms work together to approximate the whole Pareto front under dynamic conditions. A new reference-point-based prediction approach that relies on Pareto-front subparts is used to relocate out-of-date particles as the environment changes. Scalable dynamic test suites with a range of objectives and change-severity levels have been used to validate the approach [84].

The hybridized particle swarm optimization-differential evolution optimizer is introduced in [85]. A new self-adaptive PSO is developed to direct particle motions in the proposed hybrid. A self-adaptive approach for adaptively updating the three primary control parameters of the particles is presented, with the goal of balancing the global and local search capabilities. Because the performance of PSO relies strongly on its convergence, the convergence of the self-adaptive PSO is explored analytically, and a convergence-guaranteed parameter selection procedure is provided. Then, to avoid the potential stagnation issue, a modified self-adaptive differential evolution is described to evolve the personal best locations of the particles in the suggested hybrid PSO.

To establish a stronger search approach [86], differential evolution, simulated annealing, and particle swarm optimization are integrated. First, the temperature concept of simulated annealing is used to balance the hybridized algorithm's exploration and exploitation capabilities. The DE mutation operator is then employed to boost the algorithm's exploration capability in order to escape from local minima. Following that, the DE mutation operator is updated so that previous experience may be utilized to make stronger mutations. Finally, the temperature affects the propensity of the PSO particles toward their local optima or the global optimum, which balances the algorithm's random and greedy searches.

The temperature shapes the algorithm's behavior such that the random search dominates at first and the greedy search becomes more important as the temperature drops. In the proposed study of [87], the PSO method is applied to classical correlation electromagnetic analysis to address multidimensional applications, and a mutation operator has been introduced into the optimization algorithm to improve the results.

A mutualism mechanism was employed to construct a hybrid metaheuristic algorithm based on the butterfly optimization and flower pollination algorithms. To begin with, the flower pollination algorithm has considerable exploration potential; hybridizing the butterfly optimization algorithm with the flower pollination method increases the program's exploration potential considerably. Both the algorithm's exploitation potential and its convergence speed are boosted during the mutualism phase. Finally, to improve the algorithm's capacity to maintain a good balance between exploration and exploitation, the adaptive switching probability is doubled [88]. A novel pigeon-inspired optimization approach based on the Cauchy and Gaussian distributions, named Cauchy-Gaussian pigeon-inspired optimization, is proposed to handle electromagnetic inverse problems [89].

2.7 Swarm Intelligence

In recent decades, there has been a tendency in the research world to use metaheuristic
optimization methods to handle complex optimization issues. Electrical, industrial,
mechanical, software engineering, neural networks, and data mining are some of the most
famous applications of metaheuristic algorithms, as are some challenges from location theory
[91–95].

Swarm intelligence (SI) is a method based on the collective behavior of decentralized, self-organized systems, which may be natural or artificial. Many natural examples of SI are found in ant colonies, fish schooling, bird flocking, bee swarming, and so on. Besides multi-robot systems, some computer programs for attacking optimization and data-analysis problems are good examples of human-made SI artifacts [60]. The two best-known (and most successful) swarm intelligence techniques are particle swarm optimization (PSO) and ant colony optimization (ACO). In PSO, each particle sails through a multidimensional search space and adjusts its position at every step using its own experience and that of its peers, so as to reach an optimum solution shared by the entire population. PSO can therefore be considered a member of the swarm intelligence family.

There are different metaheuristic procedures available to discover the global best solution, comprising human-based algorithms, physics-based algorithms, and swarm intelligence algorithms, such as ant colony optimization, simulated annealing, glow-worm swarm optimization, differential evolution, the cuckoo search algorithm, the artificial bee colony, the genetic algorithm, and particle swarm optimization. The most exciting and widely used metaheuristic algorithms are the swarm-intelligence algorithms, which are based on the collective intelligence of flocks of birds, colonies of bees, ants, termites, and so on. Swarm-intelligence algorithms are efficient because they use knowledge commonly shared among several agents, allowing self-organization, coevolution, and learning across cycles to help generate high-quality outputs.

Swarm intelligence (SI) algorithms employ various search processes motivated by the cooperative activities of spatially distributed, self-organized entities such as animals and insects [100]. Self-organization is described as a system's capacity, in the absence of external assistance, to evolve its components or structure into an ordered form. SI algorithms with a high global search capability have therefore been effectively employed to tackle a variety of engineering design challenges. Despite this, no metaheuristic algorithm exists that can provide a complete solution to all engineering design challenges.

However, not all swarm-intelligence algorithms are efficient; only a few approaches have proven to be highly efficient and have thus evolved into powerful tools for addressing real-world issues [96]. Several of the most effective and thoroughly researched examples include particle swarm optimization [180], artificial bee colonies [97], the firefly algorithm [98], and ant colony optimization [99].

2.8 Conclusion

In this chapter, a comprehensive theoretical foundation is presented, including a conceptual overview of the standard PSO algorithm. The chapter also explores various optimization techniques applied in different applications. The following section of the thesis offers a detailed explanation of the modifications made to the traditional PSO algorithm.

Like other evolutionary algorithms, PSO has emerged as a valuable tool for optimization and
solving complex problems. It is an intriguing and intelligent computational technique capable
of efficiently identifying global minima and maxima, even in the presence of multimodal
functions. PSO's practical applications are diverse, thanks to its simple and versatile principles
that can be readily applied across a wide range of fields.

CHAPTER 3

PSO ALGORITHM

3.1 Overview

PSO is the most recent and simplest of these techniques. Each particle communicates with the others throughout the PSO search process in order to extend the search region or space. The PSO algorithm iteratively optimizes a problem, starting with a set or population of feasible solutions, referred to in this perspective as a swarm of particles, in which each particle knows both the global best position within the swarm and its personal best position (and its fitness cost) found so far during the search. The particles wander randomly in the search area in an iterative process until the entire swarm converges on the global minimum. The PSO is governed by three parameters, one for control and two for learning, and every parameter is important in the search process.

The PSO has been around for about a decade, building on constructive research in allied fields, particularly engineering optimization, that went on for several years before that. This is a relatively short time in comparison to other natural computing models such as evolutionary computation and artificial neural networks. PSO has nevertheless gained a lot of traction among researchers in a short amount of time, and it has been shown to perform well in a variety of applications.

The PSO is a population-based search technique that starts with a population of random
solutions called particles, as do the other evolutionary computation methods. PSO is unique
among computational intelligence systems in that it gives each particle a velocity. Because of
this, particle speeds are continually changing as they move across the search space. As a
result, as the search continues, the particles choose to fly to search locations with ever better
results.

The idea was developed in 1995 and has received various enhancements. The approach's
research has resulted in multiple versions targeted at satisfying a range of demands, as well as
unique applications in a number of sectors and theoretical research on the effects of various
aspects. This chapter examines the PSO theory in depth and provides an outline of its history
and evolution. This section also explains the current status of research and practice in
algorithm design, parameter selection, topological structure, etc.

The PSO method was described in [102] as a stochastic optimization strategy based on the notion of swarming. Insects, animals, birds, and fish are among the socially active creatures that the PSO algorithm can mimic. Food is discovered collectively in these swarms, and each member of the swarm modifies its search behavior based on its own and other members' learning experiences. The core design concept of the PSO algorithm is closely tied to two fields of study. One is evolutionary computation: PSO simultaneously explores the solution space for the optimum of an objective function using a swarm mode akin to an evolutionary method. The other is the study of artificial systems with life-like properties, often known as "artificial life."

The authors of [103] came up with five main principles while investigating the behavior of social animals and how to build, by computer, swarms of artificial life systems that work together.

(a) Proximity: the swarm should be able to perform simple calculations in space and time.

(b) Quality: when the quality of the environment shifts, the swarm must be able to recognize and respond to these changes.

(c) Diverse response: in order to obtain resources, the swarm must respond in a variety of ways.

(d) Stability: the swarm's behavior should not shift with every change in its surroundings.

(e) Adaptability: the swarm must be able to alter its behavior mode when warranted and adapt to new situations.

Note that the fourth and fifth principles are in direct conflict. These five principles, which encompass the most important aspects of artificial life systems, govern a swarm-based artificial life system. Particle position and velocity in PSO can be altered in reaction to environmental changes, thereby satisfying the proximity and quality principles. The swarm in PSO moves continuously, always seeking the best solution in the search space, and its members can adjust to changes in their surroundings while moving through the search space. Viewed against these five guiding principles, the technique raises no issues.

3.2 History

A vital early model for tracing the development of the PSO process is the Boid model, a model of bird behavior that serves as the foundation for the PSO algorithm [104]. The most basic model is as follows: each bird is assigned a starting speed and a random location in the Cartesian coordinate system. Then the "nearest-neighbor speed matching rule" is applied, whereby each member moves at the same speed as its nearest neighbor. Iterating in this way quickly drives all points to the same speed. Since this model is very basic and not representative of real-world events, a random variable is included in the velocity component: at each time step, a random perturbation is added to each speed, resulting in a more realistic simulation. Heppner created a "cornfield model" to replicate the foraging behavior of a flock of birds. The "cornfield," representing the location of food, was placed on the plane and the birds were positioned randomly; they then followed the principles of movement below in the direction of the food [105].

Suppose the location of the cornfield is (x_0, y_0), and the position and velocity of a bird are (x, y) and (v_x, v_y). The distance between the current location and the cornfield is used to evaluate the current position and speed: the performance value increases as the bird approaches the "cornfield" and decreases as it moves away. Assume further that each bird can remember its best position so far, referred to as pbest. In the rules below, a is a velocity-adjusting constant and rand is a random value in the range [0, 1].

The following rules dictate how to adjust the velocity of a bird:

if x > pbest_x, v_x = v_x − rand × a; otherwise, v_x = v_x + rand × a (3.1)

if y > pbest_y, v_y = v_y − rand × a; otherwise, v_y = v_y + rand × a (3.2)

Suppose further that the swarm can communicate and that each member can remember the best position found by the swarm so far, denoted gbest. After modifying the velocity according to the preceding rules, the following rules are applied, where b is another velocity-adjusting constant:

if x > gbest_x, v_x = v_x − rand × b; otherwise, v_x = v_x + rand × b (3.3)

if y > gbest_y, v_y = v_y − rand × b; otherwise, v_y = v_y + rand × b (3.4)

According to the simulation findings, if a/b is very large, the "cornfield" quickly attracts all individuals; if it is small, the particles gather around it slowly and unevenly. This rudimentary simulation demonstrates the swarm's capacity to quickly find the right location. Based on this strategy, Eberhart and Kennedy created an evolutionary optimization approach [106].
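The movement rules (3.1)-(3.4) can be sketched in a few lines of code. This is a minimal illustration only: the function names and the default step constants a = b = 1.0 are assumptions for the example, not part of the original model.

```python
import random

def step_toward(best, pos, vel, coeff):
    """One cornfield-model velocity update along a single axis.

    Implements rules (3.1)-(3.4): subtract a random step scaled by coeff
    when the position is beyond the remembered best, add one otherwise.
    """
    if pos > best:
        return vel - random.random() * coeff
    return vel + random.random() * coeff

def update_bird(x, y, vx, vy, pbest, gbest, a=1.0, b=1.0):
    """Apply all four rules to one bird, then move it."""
    vx = step_toward(pbest[0], x, vx, a)   # rule (3.1)
    vy = step_toward(pbest[1], y, vy, a)   # rule (3.2)
    vx = step_toward(gbest[0], x, vx, b)   # rule (3.3)
    vy = step_toward(gbest[1], y, vy, b)   # rule (3.4)
    return x + vx, y + vy, vx, vy
```

Iterating `update_bird` over every bird in the flock reproduces the cornfield simulation described above.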

Finally, they devised the following update rules:

v_x = v_x + 2 × rand × (pbest_x − x) + 2 × rand × (gbest_x − x) (3.5)

x = x + v_x (3.6)

This method is known as particle swarm optimization because it reduces each individual to a particle with no mass or volume, only a velocity and a position. The PSO algorithm may be summarized as follows. Individuals are called "particles" because each represents a plausible solution to the optimization problem in the D-dimensional solution space, and each remembers the best position of the entire swarm and of itself, as well as its own velocity. In each generation, the velocity in each dimension is updated by combining this information, and the particle's new position is computed. At some point in the multidimensional search space, each particle will achieve equilibrium or an optimum situation. The objective function creates new connections between the various components of the problem. A significant body of empirical evidence supports this algorithm's usefulness as an optimization tool.

The PSO may be expressed mathematically in a continuous-space coordinate system as follows. Assume that the swarm's size equals N. The position of particle i in the D-dimensional space is X_i = (x_i1, x_i2, ..., x_id, ..., x_iD), its velocity vector is V_i = (v_i1, v_i2, ..., v_id, ..., v_iD), its optimal (personal best) position is P_i = (p_i1, p_i2, ..., p_id, ..., p_iD), and the optimal position of the swarm is P_g = (p_g1, p_g2, ..., p_gd, ..., p_gD).

The update formula for an individual's personal best position in the first variant of the PSO algorithm is, without loss of generality:

p_{i,t+1}^d = x_{i,t+1}^d if f(X_{i,t+1}) < f(P_{i,t}); otherwise p_{i,t+1}^d = p_{i,t}^d (3.7)

The optimal position of the swarm is the best among the personal best positions of all the individuals. The formulas for updating the velocity and position are:

v_{i,t+1}^d = v_{i,t}^d + c1 × rand × (p_{i,t}^d − x_{i,t}^d) + c2 × rand × (p_{g,t}^d − x_{i,t}^d) (3.8)

x_{i,t+1}^d = x_{i,t}^d + v_{i,t+1}^d (3.9)

The original PSO method was not particularly effective at solving optimization problems, which is why a new version of the algorithm was quickly developed. The velocity update formula was extended with an inertia weight W, resulting in the revised update equation:

v_{i,t+1}^d = W × v_{i,t}^d + c1 × rand × (p_{i,t}^d − x_{i,t}^d) + c2 × rand × (p_{g,t}^d − x_{i,t}^d) (3.10)
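The complete loop formed by equations (3.7)-(3.10) can be sketched as a minimal global-best PSO. The function name `pso`, the sphere test function, and the parameter values (w = 0.7, c1 = c2 = 1.5, 30 particles) are illustrative assumptions for this sketch, not values prescribed in the text.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best PSO implementing updates (3.7)-(3.10)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal bests
    pcost = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = P[g][:], pcost[g]               # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update, eq. (3.10): inertia + cognitive + social
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]                 # position update, eq. (3.9)
            cost = f(X[i])
            if cost < pcost[i]:                    # personal best, eq. (3.7)
                P[i], pcost[i] = X[i][:], cost
                if cost < gcost:                   # swarm best
                    gbest, gcost = X[i][:], cost
    return gbest, gcost

# Example: minimize the 2-D sphere function, whose optimum is at the origin.
best, cost = pso(lambda x: sum(v * v for v in x), dim=2)
```

With these settings the swarm contracts toward the origin, illustrating how the inertia term carries exploration early on while the cognitive and social terms drive exploitation.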

Despite having the same complexity as the original, this new approach performs significantly better and has consequently gained widespread adoption; the canonical PSO and the original PSO are frequently referred to as two distinct algorithms. In [107], the convergence behavior of the PSO algorithm was analyzed to design a modification of the PSO technique with a constriction factor χ, which guarantees convergence and also increases the convergence rate. As a result, the velocity update formula is modified to:

v_{i,t+1}^d = χ [v_{i,t}^d + c1 × rand × (p_{i,t}^d − x_{i,t}^d) + c2 × rand × (p_{g,t}^d − x_{i,t}^d)] (3.11)

Consequently, the iteration formulas (3.10) and (3.11) do not differ in any fundamental way: when the corresponding parameters are chosen appropriately, the formulas are identical.
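Assuming the standard Clerc-Kennedy analysis behind (3.11), the constriction factor is commonly computed as χ = 2 / |2 − φ − sqrt(φ² − 4φ)| with φ = c1 + c2 > 4. A sketch (the helper name and default coefficients are illustrative):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient for eq. (3.11).

    Requires phi = c1 + c2 > 4; the common choice phi = 4.1 yields
    chi close to 0.7298.
    """
    phi = c1 + c2
    assert phi > 4, "constriction requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()
```

Multiplying χ through the bracket in (3.11) gives an inertia weight W = χ ≈ 0.7298 and effective learning factors χ·c1 = χ·c2 ≈ 1.4962 in the form of (3.10), which is the sense in which the two formulas coincide for appropriate parameters.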

There are two variants of the PSO algorithm: the global version and the local version. In the global version, the particle tracks its own best position Pbest and the swarm's best position Gbest, both regarded as the best positions found so far. In the local version, the particle does not follow the swarm's best position Gbest; instead, it records the optimal position nbest among all the particles in its topological neighborhood.

The velocity update formula, derived from (3.10), for the local version is:

v_{i,t+1}^d = W × v_{i,t}^d + c1 × rand × (p_{i,t}^d − x_{i,t}^d) + c2 × rand × (p_{l,t}^d − x_{i,t}^d) (3.12)

where p_l is the best position found in the particle's immediate neighborhood. The particle update mechanism for each generation is represented in Figure 3.1. From a sociological viewpoint, the first component of the velocity update formula reflects the influence of the particle's previous velocity.
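The neighborhood-best p_l used in (3.12) depends on the topology chosen. A ring topology, where each particle consults only its k nearest index-neighbors, is one common choice; the sketch below assumes that scheme, and the function name is illustrative.

```python
def ring_lbest(pcosts, pbests, i, k=1):
    """Local-best selection for eq. (3.12) under a ring topology.

    Particle i consults its k index-neighbors on each side of a ring,
    so good solutions spread gradually through the swarm instead of
    instantly, which helps preserve diversity.
    """
    n = len(pbests)
    neighbors = [(i + off) % n for off in range(-k, k + 1)]
    best = min(neighbors, key=lambda j: pcosts[j])
    return pbests[best]
```

Substituting `ring_lbest(...)` for the global best in the velocity update turns a gbest PSO into the lbest variant described above.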

Figure 3.1. Flow chart of the PSO algorithm

The term weighted by W, the "inertia weight," means that the particle trusts its current state of motion and moves with its own velocity as a function of inertia. The "cognitive" component defines how far a particle is from its own best position; it refers to the motion of a particle resulting from its own reasoning and experience. As a result, the parameter c1 is called the cognitive learning factor. The "social" component is determined by the distance of the particle from the global (or local) best position of the swarm. Because of their shared knowledge and collaboration, particles in a swarm can learn from each other's experiences; to model the transmission of good solutions through the swarm, the social learning factor c2 is used.

The PSO algorithm is a stochastic and parallel optimization technique. In a nutshell, its benefits are as follows: it imposes no requirements, such as differentiability, derivability, or continuity, on the objective function; the technique is simple and easy to implement in a program; and the convergence rate is fast. Nevertheless, it also has certain disadvantages.

When a function contains many local extrema, the algorithm is likely to fall into one of them and fail to produce the desired results. Moreover, as the diversity of the particles rapidly decreases, convergence occurs too early; these two effects typically go hand in hand. Without cooperation from other search methodologies, the PSO algorithm may be unable to produce suitable results: the method does not make the best use of the data found during the computation, since each generation uses only the swarm-wide and individual optima of that iteration. Although a global search is feasible, PSO does not guarantee convergence to a globally optimal solution. As of this writing, meta-heuristic bionic optimization algorithms such as PSO lack a sufficient theoretical foundation. The approach simplifies and simulates swarm searches, but it does not explain why it works or delimit the range of applications for which it is appropriate. As a result, the PSO approach is well suited to a certain class of optimization problems: high-dimensional, low-precision optimization problems.

3.3 Theoretical Analysis

The theory of the PSO method, i.e., how particles communicate with each other and why the
method works well for many optimization problems but not for others, has recently attracted a
lot of attention. The research can be separated into three key disciplines: the trajectory of
a single particle, the question of convergence, and the evolution and time-dependent
distribution of the entire particle system.

After a series of simulations, the first simplified analysis of particle behavior [108]
discovered different particle trajectories. It was postulated that in a simple one-dimensional
PSO mechanism, a particle moves along a specified route on a sine wave with randomly
determined amplitude and frequency. This first theoretical investigation addressed only the
simplified PSO model, because the analysis did not incorporate inertia weights. Because P_id
and P_gd were constantly changing, the particle's path exhibited several distinct amplitudes
and frequencies, so the overall trajectory appeared chaotic. As a result, the impact of these
findings was significantly reduced.

The PSO approach employs a constant inertia weight and uniformly distributed random numbers
$c_1$ and $c_2$. If the inertia weight is also treated as a random variable, how do the
first- and second-order stability regions change? In first-order stability studies [109], mean
trajectories were tested for stability in the parameters $(\omega, \varphi)$, where
$\varphi = (a_g + a_l)/2$ and $c_1$ and $c_2$ are uniformly distributed in the ranges
$[0, a_g]$ and $[0, a_l]$. Particle swarm dynamics and the convergence qualities of PSO may be
better understood and explained using a probabilistic structural analysis, which involves
higher-order moments [110].

The authors of [111] propose an alternate model of the PSO dynamics employing a closed-loop
control system and analyze the system's stabilization behavior using the root locus and Jury's
test approaches.

The literature [112] also looked at convergence and parameter selection. A study of the social
variant of PSO ($a_l = 0$) and of fully informed particle swarms was also conducted. The
estimation of all three PSO variables (the global and local acceleration factors and the
inertia weight) sets this work apart from others. The authors gave an analytical examination
of the upper limit on the particle trajectories' second-order stability zones that most PSO
algorithms have. In terms of algorithm performance, tuning the parameters of the PSO algorithm
to the USL curve produces the best results.

Deterministic PSO was also investigated, and the author discovered the regions in the
parameter space where stability can be assured. The study's authors, however, admitted that
their conclusions were restricted since they failed to account for PSO's stochastic character.
Similar work on the continuous variant of the PSO algorithm can be found in [113].

The author employed Lyapunov stability analysis and the passive system concept to investigate
the stability of particle dynamics. In this investigation, stability conditions were derived
by treating the factors as deterministic rather than random. The particle dynamics were
modeled as a nonlinear feedback control system; the feedback loop of such a system has a
time-varying gain as well as deterministic linear and nonlinear components. Although random
components are taken into consideration, the stability analysis was centered on the optimal
feasible position [114].

As a result, the conclusion cannot be applied immediately to particles that are not optimal.
Moreover, even though PSO can converge, and can guarantee that the solution it reaches is a
local optimum discoverable by the swarm, it cannot guarantee that this solution is the global
optimum.

In [115], the author proposed a PSO approach that can guarantee the procedure's convergence.
The global best particle employed a new updating equation that generated a random exploration
around the global best position, while the other particles continued to use their previous
equations to update their states. Although convergence to a locally optimal solution could be
assured, its performance on multimodal problems was inferior to that of the standard PSO
technique. Increasing population diversity was therefore thought to be an efficient method for
avoiding the local optima toward which the swarm had been shown to converge prematurely [116].

On the other hand, increasing swarm diversity delays the swarm's convergence to its optimal
solution. This trade-off is well known, since it was proven in [117] that no algorithm can
outperform all others on every type of problem. The objective of optimization algorithm
performance trials should not be to develop a generic function optimizer but rather a general
problem solver capable of performing well on a variety of balanced, realistic benchmark
situations [118].

The author investigates distinct PSO dynamics in depth, comparing them to their continuous
analogues. In this work, behaviors such as oscillation, trajectory attenuation, and the center
attraction potential are investigated, which may be used to choose feasible PSO parameters and
to explain the performance of certain parameter sets described in the literature [119].

3.4 Algorithm Design

3.4.1 The selection strategy

In [120], a multi-phase particle swarm optimization method was proposed in which particles
focus on momentary search objectives at various stages; these momentary targets allow
particles to migrate towards and away from their own or the global best position.

In [121], each particle learned its position in each dimension from other dimensions picked at
random; if the new position was preferred, it replaced the original. New PSO algorithms have
been developed in which the roulette selection approach is used to choose the exemplar, so
that all particles have an opportunity to lead the search direction in the early stages of
evolution, preventing premature convergence. An orthogonal learning method was employed to
obtain efficient exemplars in the introduction of an orthogonal learning PSO [122].

If a fuzzy measure is utilized, the several particles having the highest fitness levels in
each neighborhood may influence other particles. In contrast to the original PSO technique,
particles in one class of schemes migrate away from the worst location rather than toward the
best one: instead of utilizing the algorithm's best position, the authors advised selecting
the worst position [123]. In addition, a unique approach termed the "repel operator" was
presented in [124], which used knowledge about an individual's ideal placement and the optimal
position of a group of swarm members to repel particles. Because it kept track of the
particles' worst placements, both for each individual and for the entire swarm, the swarm was
able to quickly attain its ideal location.

3.4.2 Modification in particle’s updating formula

One of the best-known features of PSO variants is their broadened search scope. Many
researchers employ chaotic sequences to adjust particle placements, where the chaotic
character of the system causes the particles to search exhaustively for solutions [125].
Chaotic sequences and Gaussian distributions have been used to randomize the swarm's cognitive
and social activities [126].

A virtual quadratic objective function based on the personal and global optimal solutions may
be exploited using chaotic PSO. This concept uses the steepest descent technique perturbed by
a chaotic system. The function contains two global minima, one at the particle's personal best
and the other at the global best [127].

In the Bare Bones PSO technique [128], the particle locations are updated using a Gaussian
distribution. Because many foragers and roaming animals follow a Lévy distribution of steps,
such distributions have aided optimization methods: a Lévy distribution was used to generate
random samples for the particle dynamics in PSO. On benchmark problems, the Lévy PSO
outperformed both a conventional PSO and a comparable Gaussian distribution model [129]. Each
particle in the speed-updating equation was given the capacity to store information in its
memory. An acceleration component was introduced into the PSO algorithm [130], transforming it
from a second-order stochastic system into a third-order stochastic system. An updated formula
for position and speed, together with a new parameter "age," was developed to increase the PSO
algorithm's global search capabilities [131].

3.4.3 Modifying velocity update strategy

Although PSO efficiency has risen dramatically over the previous decades, selecting a suitable
velocity updating technique and parameters remains an essential research area. The swarm's
developer [132] developed a new version of the particle swarm model that included two distinct
sorts of agents, "explorers" and "settlers." The swarm's current distance from the ideal site
may be used to dynamically change the velocities of the particles at every time step. A
uniform distribution of random values may affect particle movement, offering tremendous
exploration opportunities in the velocity updating approach.

In [133], the author presented a self-adaptive PSO using multiple velocity approaches to
increase the performance of PSOs. In the SAPSO-MVS, novel control parameters in the entire
evolution approach are employed by employing an innovative velocity update mechanism to
optimize the balance between the PSO algorithm's exploration and exploitation capabilities,
eliminating the need to manually alter PSO parameters.

When presenting Crazy PSO in [134], the author randomized particle velocity within stated
limits. Certain particles, dubbed "crazy particles," had their velocities randomly reassigned
based on a predefined probability in order to preserve variation for global search and
increase convergence; unfortunately, only a few studies provided data indicating how this
predefined probability of craziness should be chosen. The author of [135] proposed FDR-PSO,
which employs unique velocity updating equations to reset the velocity of every particle based
on the fitness-distance ratio. Li's self-learning PSO can automatically adjust the velocity
update technique during the evolution process [136]. In switching PSO, Lu proposed a
mode-dependent velocity update equation with Markovian switching factors [137].
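The "crazy particle" idea can be illustrated with a few lines of code: with a predefined probability, a particle's velocity is re-randomized within bounds. This is a hedged sketch of the mechanism described for Crazy PSO; the function name and the default probability are assumptions, not values from [134].

```python
import random

def craziness(velocities, v_max, p_craze=0.2, rng=random):
    """With probability p_craze, replace a particle's velocity by a fresh
    random velocity with each component drawn from [-v_max, v_max]."""
    out = []
    for v in velocities:
        if rng.random() < p_craze:      # this particle goes "crazy"
            out.append([rng.uniform(-v_max, v_max) for _ in v])
        else:                           # otherwise keep the velocity unchanged
            out.append(list(v))
    return out
```

Setting p_craze to zero recovers the standard PSO behavior, while larger values trade convergence speed for diversity.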


3.4.4 Hybridization of PSO

Hybridization has two primary goals. The first is to increase diversity and mitigate premature
convergence. The second is to expand the PSO algorithm's search capabilities. Numerous models
have been studied [138] in order to broaden the scope of PSO searches. The PSO algorithm is
given genetic processes, including selection, crossover, and mutation, to boost its variety
and its capacity to escape local minima. In the "CSPSO" method, each particle in the
population is directly represented by its best value, which differs from the original PSO and
its modifications [139]. Particle swarm optimization and differential evolution have been
combined by first splitting each method and adding the variations of each process as
alternative choices for the associated module. After comparing the inner structures of PSO and
DE, the procedures are hybridized by constructing two populations with the variation operators
of PSO and DE, respectively, and choosing individuals out of those two sets. This new
hybridization, dubbed PSODE, incorporates the most recent variations from both sides and, more
crucially, generates a large number of previously unknown swarm algorithms by varying the
components in the hybridization [140].

3.5 Parameter Selection

Inertia weight, social and cognitive parameters, and population initialization are a few of
the crucial variables that make up the PSO algorithm.

3.5.1 The Inertia Weight

Due to its importance for PSO performance, the inertia weight currently attracts a lot of
attention. Inertia weight techniques include random inertia weight, linearly decreasing
inertia weight, nonlinear inertia coefficients, chaotically decreasing inertia weight,
natural-exponent inertia weight, oscillating inertia weight, and linear inertia coefficients.
Many solutions make no attempt to adjust the inertia weight based on the algorithmic state; in
these techniques the inertia weight is often varied only with the number of iterations that
have elapsed. In an extension of this work, fuzzy systems were used to nonlinearly alter the
inertia weight during optimization [141]. Nonlinear time-varying inertia weights and constant
acceleration coefficients were implemented in the LHNPSO approach by initializing the
particles with a low-discrepancy sequence [142].
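Two of the schedules named above, the linearly decreasing and the natural-exponent inertia weight, can be sketched as simple functions of the iteration counter. The endpoint values 0.9 and 0.4 and the decay constant are commonly used illustrative choices, not values prescribed by the thesis.

```python
import math

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight over t_max iterations."""
    return w_start - (w_start - w_end) * t / t_max

def exp_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Natural-exponent decreasing inertia weight (one common nonlinear form);
    the decay constant 4.0 is an assumed example value."""
    return w_end + (w_start - w_end) * math.exp(-4.0 * t / t_max)
```

Both start near w_start for early, exploratory iterations and approach w_end to favor exploitation late in the run; the exponential form simply front-loads the decrease.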

3.5.2 Social and Cognitive Parameters

PSO includes two components: social experience and cognitive experience. A particle represents
an individual solution in a problem space where there may be more than one possible solution
and the optimal solution to the problem is sought. The particles learn from two factors: their
own learning, known as cognitive experience, and the collective knowledge of the entire swarm,
known as social experience. The personal best value P_best represents the particle's own
learning, whereas the global best value G_best represents the swarm's shared experience:
P_best is the particle's best solution in its whole history, and G_best is the swarm's
best-ever position, through which the swarm guides the particle. The velocity of a particle
toward its next position is calculated by combining the cognitive and social experience [143].

The authors of [146] analyzed the effect of different time-varying inertia weight updating
techniques on the performance of binary PSO when faced with the feature selection problem. In
[147], learning factors were used that decreased linearly over time or that were dynamically
tuned and established based on the evolutionary conditions of the particles. In [148], the
learning factors were dynamically adjusted based on the number and degree of persistent
declines in fitness values in the swarm.

3.5.3 Population Initialization

The initialization of the particles is important to PSO's performance. If the initialization
is poor, the process may explore undesirable locations, making it hard to find the best
solution. The initialization of the swarm therefore has a big impact on PSO's performance
[149]. In this part, we investigate the various PSO initialization options.

The PSO method employs a randomly generated start-up population. However, intelligent
initialization approaches, such as the nonlinear simplex method [150], are utilized to ensure
an equitable distribution of the starting population members. The PSO begins with a swarm of
randomly generated solutions: each particle begins in a random place in the search space,
which represents a solution. Another approach is to employ the uniform technique, which
spreads particles evenly over the search region [151]. When the population of the PSO
algorithm is utilized as the starting population for the GA algorithm, as the author pointed
out in [152], both strategies may give superior results.

To initialize the particles, the authors of [121] chose several low-discrepancy sequences,
employing the Halton, Sobol, and Faure sequences to start the swarm, and found that PSO with
Sobol initialization outperforms all other approaches. The proposed versions were put to the
test with six typical test functions [153]. Nonparametric particle swarm optimization (NP-PSO)
was also proposed by the authors in [154] to increase local and global searches without tuning
the approach's parameters; to improve search capacity, local and global topologies are
combined with quadratic interpolation methods.
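A low-discrepancy initialization such as the Halton sequence can be built from the van der Corput sequence, one prime base per dimension. The sketch below is illustrative only; the helper names and the choice of prime bases are assumptions, and libraries such as SciPy provide ready-made samplers for the same purpose.

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton_init(n_particles, bounds, bases=(2, 3, 5, 7, 11)):
    """Spread n_particles evenly over the search box using a Halton sequence
    (one prime base per dimension); bounds is a list of (lo, hi) pairs."""
    return [[lo + van_der_corput(i + 1, b) * (hi - lo)
             for (lo, hi), b in zip(bounds, bases)]
            for i in range(n_particles)]
```

Unlike pseudo-random initialization, successive Halton points fill gaps left by earlier points, so even a small swarm covers the box without large clusters or voids.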

3.6 Topological Configuration

Because population diversity has a direct influence on the PSO algorithm's effectiveness,
numerous researchers have suggested alternate population topological configurations and are
working on developing new topologies to boost the PSO algorithm's performance.

Viewed in this manner, topology is closely tied to the concept of a neighborhood.
Neighborhoods might be either fixed or flexible, and there are two ways to define them: the
first is based on the particle's flag (or index), which has no relation to distance, whereas
the second is based on the topological distance between the particles. Many different forms of
static neighborhood structures and their impact on the PSO algorithm's performance have been
studied, with the conclusion that the star, ring, and von Neumann topologies are the most
adaptive.
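For instance, in the ring topology each particle is informed only by itself and a few index-wise neighbors, with indices wrapping around. The following sketch (assumed helper name, minimization convention) shows how a particle's local best would be selected under such a ring:

```python
def ring_local_best(pbest_positions, pbest_fitness, i, k=1):
    """Best personal-best position within particle i's ring neighborhood:
    i itself plus k neighbors on each side, indices wrapping around."""
    n = len(pbest_positions)
    idx = [(i + d) % n for d in range(-k, k + 1)]           # ring neighborhood
    best = min(idx, key=lambda j: pbest_fitness[j])          # minimization
    return pbest_positions[best]
```

Because information spreads only one neighborhood per iteration, a small k slows the propagation of the current best solution, which is exactly the diversity-preserving effect attributed to small neighborhoods.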

On more difficult problems, PSO with a smaller neighborhood may perform better than PSO with a
larger one. Dynamic topology remains a small part of the research: one work includes an
Adaptive Time-Varying Topology Connectivity module together with a new learning approach, and
in Hierarchical PSO a dynamic tree hierarchy based on the performance of every particle in the
population was employed to establish the neighborhood structure [155-158].


3.7 State of the art of PSO applications

This section discusses the various PSO applications. Rather than using the gradient of the
problem to be optimized, PSO solves problems through social interaction; this means that,
unlike traditional optimization approaches, PSO [159] does not require the problem to be
differentiable. The population is first initialized; second, the fitness value of each
particle is calculated; third, P_best and G_best are updated; and fourth, the velocity and
position of all particles are adjusted. The second, third, and fourth phases are repeated
until a stopping criterion [161–162] is met.
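The four phases above can be sketched as one compact loop. This is a minimal global-best PSO for illustration, not the implementation used in this thesis; the swarm size, coefficient values, and position clamping are assumed example choices.

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over a box (illustrative only)."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Phase 1: initialize positions, velocities, and the personal/global bests
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest, pfit = [list(p) for p in x], [f(p) for p in x]
    g = min(range(n), key=lambda i: pfit[i])
    gbest, gfit = list(pbest[g]), pfit[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Phase 4: velocity and position update (clamped to the box)
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            # Phases 2 and 3: evaluate fitness, update P_best and G_best
            fit = f(x[i])
            if fit < pfit[i]:
                pbest[i], pfit[i] = list(x[i]), fit
                if fit < gfit:
                    gbest, gfit = list(x[i]), fit
    return gbest, gfit
```

On a simple convex test function such as the 2-D sphere, this loop converges to the neighborhood of the origin within a few hundred iterations.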

PSO has several advantages over other swarm intelligence procedures: it is simple to build,
has a limited number of parameters, and excels in global search. PSO has become one of the
most widely used and capable optimization techniques, and it is currently recognized as a
powerful stochastic approach whose behavior depends on swarm size and capacity. Because of its
ability to scan the entire space for large-scale problems, PSO can be used to manage
unbalanced situations that persist and change over time [163–165].

To ensure the stability and robustness of PSO, the three parameters (c_1, c_2, and W) must be
correctly chosen. Incorrect parameter values must be corrected to prevent the algorithm from
prematurely converging to a local optimum in the search space. PSO has a propensity for rapid,
premature convergence around ordinary optima, yet it converges slowly in complex regions of
the search [166–167].

One study used the constriction factor-based particle swarm optimization (CFPSO) method to
assess minimum zone form errors such as circularity, straightness, cylindricity, and flatness.
The addition of the constriction factor helps speed up CFPSO's convergence. A simple minimum
zone objective function is theoretically developed for each form problem and then optimized
using the presented idea [168].

Reference [169] describes a PSO with moving particles (MP-PSO), in which certain particles can
move over a scale-free network while also varying the cooperation form along the search space;
MP-PSO demonstrates greater flexibility and diversity.

The particle arrangement may vary adaptively in order to balance exploration and exploitation
to a large degree. The author suggested a particle swarm optimization technique to enhance the
geometrical correctness of additive manufacturing mechanisms by decreasing geometrical
dimensioning and acceptance error. Taking the model factors as design elements, a regression
approach is used to create a mathematical model for the inaccuracy (circularity and flatness).
Minimizing circularity and flatness is presented as a multivariable optimization problem that
is solved using the PSO technique, which enhances the geometrical accuracy of the ABS
component through the optimal search of the AM processing parameter values [170].

A time-dependent inertia weight method is used in the decreasing inertia weight particle swarm
optimization technique; according to this idea, the inertia weight decreases in a nonlinear
manner over time. Another factor, the constriction factor, is included to increase the speed
of convergence and prevent particles from leaving the search space; this notion is known as
constriction expansion, and it is defined as:

$V_{id}^{t+1} = K \cdot [V_{id}^t + c_1 r_1 (P_{best,id}^t - X_{id}^t) + c_2 r_2 (G_{best,id}^t - X_{id}^t)]$ (3.13)
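The scaling in (3.13) can be sketched as below. The closed-form expression for K follows the widely used Clerc-Kennedy constriction formula with c1 + c2 > 4; the function names and default coefficients are illustrative assumptions, not the exact values of the CFPSO study.

```python
import math
import random

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient K, valid for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def cfpso_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=random):
    """Velocity update in the spirit of Eq. (3.13): the whole bracketed
    sum, inertia plus both attraction terms, is scaled by K."""
    k = constriction(c1, c2)
    return [k * (vd + c1 * rng.random() * (pd - xd)
                    + c2 * rng.random() * (gd - xd))
            for vd, xd, pd, gd in zip(v, x, pbest, gbest)]
```

With the common choice c1 = c2 = 2.05, K evaluates to roughly 0.73, which damps the velocities enough to guarantee a contracting search without an explicit velocity clamp.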

In addition, a standard electromagnetic test problem known as "TEAM problem 22" is used to
validate the stability and output of various optimization approaches for inverse problems in
electromagnetic design optimization. TEAM problem 22 [174–176] is the optimal design of a
superconducting magnetic energy storage device that stores considerable energy in the magnetic
field using a coil arrangement [172], [173].

The authors of [177] use multi-level Gaussian mutations with various standard deviations to
enhance the exploring ability in the feasible zone, ensuring convergence speed and avoiding
premature convergence. A Euclidean distance approach with a dynamic inertia weight for every
particle has been published in [178] to minimize premature convergence. The author of [179]
offers a novel modified particle swarm optimization approach for assessing the geometric
parameters that describe flat surface shape and function. Flat surfaces have four geometric
features: straightness, flatness, perpendicularity, and parallelism. A nonlinear minimum zone
goal function is analytically built for each flat-surface geometric feature.

To solve the issue of diversity loss and to prevent early convergence, the Adaptive PSO
employs an adaptive approach based on the inertia weight. The personal best fitness of a
randomly selected particle from the swarm is compared with that of the current particle, and
the larger one is used to determine the particle's velocity, in order to better identify
particles plunging into a local optimum [180].

A hybrid PSO with a variable neighborhood search optimization technique that swiftly converges
to global minima without being trapped in local optima is described in [181]. The
aforementioned approach enhances localization precision, since it combines the basic traits
and real abilities of PSO with variable neighborhood searches. Another distinguishing aspect
of the MPSO technique is that it uses an altered search equation to produce new swarm
positions and fitness solutions. This study mimics the effect of an object's shape on the
accuracy of scanned data in the context of contactless laser scanning [182].

A new parameter $c_3$, the average of $c_1$ and $c_2$, is introduced into the velocity update:

$V_i^{k+1} = W^k V_i^k + c_1 r_{1i} (P_{best-i}^k - X_i^k) + c_2 r_{2i} (G_{best-i}^k - X_i^k) + c_3 r_{3i} (E_{best}^k - X_i^k)$ (3.14)

The Improvement Factor (IF) is the last term in (3.14). This factor can assist the velocity in
determining the placements of subsequent particles. As a result, GPSO has outperformed
conventional PSOs, particularly when dealing with high-dimensional problems. For the present
generation, the IF is:

$IF = c_3 r_{3i} (E_{best}^k - X_i^k)$ (3.15)

where $c_3 = (c_1 + c_2)/2$.
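The extended update with the Improvement Factor term can be sketched as follows. This is an illustrative reading of the GPSO equations, with `ebest` standing for the elite term written E_best above; the function name and default coefficients are assumptions.

```python
import random

def gpso_velocity(v, x, pbest, gbest, ebest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """Velocity update of Eq. (3.14): standard inertia, cognitive, and social
    terms plus the Improvement Factor c3*r3*(E_best - X), c3 = (c1 + c2)/2."""
    c3 = (c1 + c2) / 2.0
    return [w * vd
            + c1 * rng.random() * (pd - xd)   # cognitive term
            + c2 * rng.random() * (gd - xd)   # social term
            + c3 * rng.random() * (ed - xd)   # Improvement Factor term
            for vd, xd, pd, gd, ed in zip(v, x, pbest, gbest, ebest)]
```

When all three attractors coincide with the particle's position, only the inertia term survives, exactly as in the plain PSO update.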

RMPSO, with the presentation of a gene regulatory network, is described in [185]. It has been
documented in several studies that whenever a personal best particle approaches the global
best, the current personal best particles are kept while the newly obtained global best
particle is ignored. Moreover, the current P_best and G_best having equivalent fitness values
may be of diverse arrangements. As a result, a repository of solutions with the same fitness
as the current P_best, and a repository matching the current G_best value, can be maintained:
P_best^rep and G_best^rep are the two repositories established by the RMPSO concept for
keeping the solutions matching P_best and G_best, respectively. The repositories are updated
in two ways: first, the particle's location is updated, and the resulting fitness value is
evaluated and checked against P_best and G_best; second, after modifying the particle's
location by applying each mutation mechanism to the current solution, the fitness value is
computed and compared to P_best and G_best.

If the current P_best is worse in fitness than the newer one, the existing P_best^rep is
cleared and the new solution is added. If the new fitness value is the same as G_best, the
repository G_best^rep is expanded and the new value added. The repository recognizes distinct
outcomes with the same fitness value, and duplicate solutions are discarded. P_best and G_best
are then chosen at random from P_best^rep and G_best^rep, respectively. A mutation mechanism
was used in the research, as shown in Figure 3.2.

Figure 3.2. The five successive mutation processes

The addition of a swarm leader to the traditional PSO was first presented as ELPSO in [186].
The author introduced novel mutation strategies with five staged, consecutive mutations to
avoid the premature convergence observed in previous PSOs, namely the convergence of P_best
toward G_best.

In this study, the swarm leader is highly significant, and five consecutive mutation
techniques are introduced and examined in each iteration. The swarm's existing G_best is
replaced with a new one if the new G_best has a better value; the leader is thereby improved
and attempts to pull the particles in the direction of a promising area.

To strengthen the exploratory skills and abilities, several mutation techniques, including
Gaussian, opposition-based, Cauchy, and DE-based mutations, were used to discover the leader
with the best objective function value, and the procedure was repeated many times in the hopes
of finding a superior leader, as presented in Figure 3.2.
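The staged-mutation idea can be sketched as a chain of candidate moves applied to the leader, each kept only if it improves the objective. This is a simplified sketch with three of the named mutation families (Gaussian, opposition-based, Cauchy); the function name, mutation scales, and the greedy acceptance rule are assumptions, not the exact ELPSO operators.

```python
import math
import random

def refine_leader(leader, fitness, bounds, rng=None):
    """Apply successive candidate mutations to the swarm leader, keeping
    each mutant only if it improves the (minimized) objective."""
    rng = rng or random.Random()
    best, best_f = list(leader), fitness(leader)

    def try_candidate(cand):
        nonlocal best, best_f
        cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
        f = fitness(cand)
        if f < best_f:                      # greedy: keep only improvements
            best, best_f = cand, f

    # Stage 1: Gaussian mutation around the current leader
    try_candidate([c + rng.gauss(0.0, 0.1 * (hi - lo))
                   for c, (lo, hi) in zip(best, bounds)])
    # Stage 2: opposition-based candidate (mirror point inside the box)
    try_candidate([lo + hi - c for c, (lo, hi) in zip(best, bounds)])
    # Stage 3: Cauchy mutation (heavy-tailed jumps via inverse transform)
    try_candidate([c + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
                   for c in best])
    return best, best_f
```

Because every stage is accepted greedily, the leader's objective value can only improve or stay the same, which is what lets the leader keep pulling the swarm toward promising regions.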


As an extension of ELPSO, a novel notion termed modified particle swarm optimization with
effective guides (MPSOEG) [187] is presented to handle real-world optimization issues. An
optimal guide creation (OGC) module has been implemented to execute the suggested idea, in
which a new global best particle is produced. The novel mechanism's major proposal is to
strike a fair balance between particle exploitation and exploration searches while avoiding
the processing effort associated with classic PSOs.

The authors proposed a strategy for controlling population diversity and improving PSO
performance by determining the inertia weight based on Euclidean distance [188]. An upgraded
form of PSO was published in [189], which aimed to address the shortcomings of classic PSOs in
terms of PV parameter estimation. A sine-wave chaotic inertia weight mechanism is first
employed in this study to increase PSO performance and provide a suitable balance between
local and global searches; to reach the best solution, the acceleration coefficients are
steered using a tangent chaotic approach.

The authors in [190] describe an improved multi-strategy particle swarm optimization (IMPSO)
method. It recommends that multi-strategy evolution techniques with a nonlinearly declining
inertia weight be used to improve the global optimizing efficiency of particle swarms,
improving the structure and parameters so as to better map the extremely nonlinear features of
railway traction braking. An adaptive inertia weight factor (AIWF) is included in the PSO
velocity update equation; its major characteristic is that, unlike in a standard PSO, where
the inertia weight is kept constant during the optimization, the weights are flexibly
constructed based on the particle's movement rate in order to obtain the best solution [191].

An adaptive mutation strategy is described using the extended non-uniform mutation operator,
in which adaptive mutation is used to help trapped particles escape from local optima [192]. A
hybridized inertia weight modification tactic, based on a new particle diversity measure and
an adaptive mutation strategy, has been used to escape local convergence of the algorithm in
complex networks [193]. In [194], the authors applied different mutation operators to
particles in order to boost the search ability of particles and avoid stagnation.

The author proposes a novel idea of using an adaptive mutation-selection strategy to conduct
local searches around the global best particle in the current population, which could help to
improve the exploratory potential of the search domain and speed up the convergence of the
candidates. The work's aim is to find the best solution by combining stochastic methods and
PSO with an adaptive Cauchy mutation method to design the new algorithm [195], [196].

To overcome the two main shortcomings of PSO, the author introduces a multiple-scale
self-adaptive cooperative mutation strategy-based particle swarm optimization method (MSCPSO)
in [177]. In the proposed strategy, multi-scale Gaussian mutations with different standard
deviations are applied to increase the capacity to sufficiently scan the whole solution space.
The approach can be mathematically represented as:

(3.16)

where

if

then

(3.17)
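The multi-scale Gaussian mutation idea can be illustrated as follows: a position is perturbed with a Gaussian whose standard deviation is drawn from several scales, so both fine local moves and large escaping jumps are possible. This is a sketch of the idea only; the scale values and the per-dimension scale choice are assumptions, not the MSCPSO equations of [177].

```python
import random

def multiscale_mutate(x, bounds, scales=(0.01, 0.1, 1.0), rng=None):
    """Gaussian mutation whose standard deviation is picked from several
    scales, expressed as fractions of each dimension's search range."""
    rng = rng or random.Random()
    out = []
    for c, (lo, hi) in zip(x, bounds):
        sigma = rng.choice(scales) * (hi - lo)          # pick one mutation scale
        out.append(min(max(c + rng.gauss(0.0, sigma), lo), hi))  # clamp to box
    return out
```

Small scales refine a promising solution, while the occasional large scale gives a trapped particle a chance to jump out of a local optimum, which is the cooperative effect the multi-scale strategy aims for.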

The authors presented a unique approach to the learning parameters in [197] to address complex
engineering problems. According to this theory, the two learning variables are dynamically
updated so as to make the particles flee from a local optimum and converge to the global
optimal solution. [198] investigates the use of Cauchy mutations and Gaussian mutations in a
modified PSO; the main goal is to achieve better convergence and the best possible results
while solving diverse real-world problems. The PSO is a foundation in the field of swarm
intelligence; to achieve greater convergence, the suggested PSO employed a higher weight
factor than the conventional PSO.

In [199], an example-based learning PSO has been reported to improve swarm diversity and
convergence speed. According to the ELPSO idea, many global best particles are set as examples
to participate in the velocity update equation, the exemplar being selected from the current
best candidates instead of the particle's own global best.

The proposed work is mathematically shown as:

$V_i^k = W V_i^k + c_1 \, rand_{1i} (P_{best,i}^k - X_i^k) + c_2 \, rand_{2i} (E^k - X_i^k)$ (3.18)

where $E^k$ denotes an exemplar selected from the current best candidates.

In [200], the exact particle location and position were described and explained for the
purpose of adjusting the balance between exploration and exploitation in the search process,
and the update is mathematically expressed as:

$X_i^{k+1} = (1 - \beta(t)) P_{best}^k + \beta(t) G_{best}^k + \alpha(t) R_i$ (3.19)

In [201], an advanced particle swarm optimization (APSO) approach is presented. The
algorithm uses an improved velocity update equation to ensure that the particles reach the best
solution more quickly than with a traditional PSO. In [202], a PSO with a joint local and global
expanding neighborhood topology (PSO-LGENT) is proposed that employs a novel expanding
neighborhood topology.

In [203], a local search strategy was developed in which every candidate tries to reach a better
position during the search process and then competes to become the best of the whole swarm.

In [204], the genetic algorithm (GA) is used to amend the decision vectors using genetic
operators, while the PSO is used to boost the vector positions. In [205], the PSO algorithm is
paired with the sine-cosine algorithm (SCA) and the Lévy flight distribution. In the SCA, the
solution update is based on the sine and cosine functions, while Lévy flight is a random walk
that uses the Lévy distribution to produce search steps and occasional large jumps to search
the exploration space more effectively. A new hybrid algorithm has been proposed that
combines the exploitation capabilities of the PSO with the exploration capabilities of the Grey
Wolf Optimizer (GWO); it merges the two methods by substituting, with a low probability, a
PSO particle for a partially better particle from the GWO [206]. [207] proposes a modular
taxonomy and several categorization algorithms for distinguishing and analyzing various
hybridization techniques. The taxonomy is used to categorize a wide range of hybrids,
including PSO-DE hybrids, as well as a number of other well-known hybrids. This approach
may be used to discover new hybridization methods and to develop hybrid optimizers.

For many years, optimization has been a lively area of research, leading to the development of
numerous optimization methods across many disciplines. In both academic and technical
fields, stochastic search techniques have been heavily used in recent years to find the
overall best solution to complicated design challenges. Artificial bee colonies, ant colonies,
genetic algorithms, and other novel optimization algorithms have recently been employed to solve
complicated multimodal and sophisticated optimization problems. From this perspective, particle
swarm optimization (PSO) is viewed as a global optimizer for locating the best answer to
challenging, high-dimensional problems [208].

The PSO is one of the newest and simplest algorithms. Kennedy and Eberhart first introduced
this population-based stochastic search algorithm in 1995, drawing inspiration from how flocks
of birds and animals forage for food in the environment. Each particle moves stochastically and,
through communication with the other particles, adjusts its direction toward the destination.
As a result, a large search area is explored, which offers the best chance of discovering the
overall optimal solution. One of its main benefits is that a PSO is simpler to implement than
other evolutionary algorithms because there is no crossover or encoding/decoding as in a
genetic algorithm. Due to its few parameters, PSO's computational cost is lower than that of
other optimization algorithms. PSO is straightforward both in theory and in numerical
application. Because of its reliability, stability, and simplicity, PSO has been used in numerous
engineering domains, including electric motors [209]–[210], automatic systems, image
processing [211], and robot technology [212]. However, no single optimization algorithm can
solve all optimization problems. Moreover, the primary flaw of a basic PSO is early
convergence to a local optimum when tackling high-dimensional, challenging real-world
problems. Several adjustments have been suggested to address this issue [213].
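As a concrete reference point for the variants surveyed above, the canonical PSO update rules can be sketched in Python as follows. This is a minimal illustration, not the thesis's implementation: the sphere test function and the parameter values (w = 0.7, c1 = c2 = 1.5) are illustrative choices.

```python
import random

def pso(objective, dim, bounds, swarm_size=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal canonical PSO for minimization (velocity/position updates only)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [x[:] for x in X]                       # personal best positions
    pbest_f = [objective(x) for x in X]             # personal best fitness
    g = min(range(swarm_size), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best

    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])    # cognitive term
                           + c2 * r2 * (gbest[d] - X[i][d]))      # social term
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))     # clamp to bounds
            f = objective(X[i])
            if f < pbest_f[i]:                      # update personal best
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:                     # update global best
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Example: minimize the 5-dimensional sphere function on [-10, 10]^5
best, best_f = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

All the modifications discussed in this chapter (dynamic inertia weight, adaptive learning factors, mutation operators) act on the terms of this update loop.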

3.8 Conclusion

The conventional PSO algorithm suffers from issues such as premature convergence, an
imbalance between exploration and exploitation, and a lack of diversity in the later stages of
optimization due to static parameter values. This chapter aims to address these problems by
proposing enhancements to the PSO algorithm. The suggested approach dynamically
and adaptively modifies the fundamental parameters of PSO. In addition, a novel mutation
mechanism is introduced to generate the best particles. Along with the updates to the PSO
algorithm, control and learning parameter settings are also incorporated.

To evaluate the performance of the proposed enhanced PSOs, they are compared with other
variations of the PSO algorithm. Experimental results demonstrate that our upgraded PSOs


exhibit superior performance in terms of overall robustness compared to other approaches.


These improvements effectively tackle the issues of premature convergence, exploration-
exploitation balance, and diversity in the optimization process.


Chapter 4

Improved Dynamic PSO for Problems in Electromagnetic Devices

4.1 Overview

Several PSO modifications have been designed to improve the efficiency of PSOs, as seen in
the preceding two chapters. However, because PSO is a relatively recent concept in evolutionary
algorithms, it still faces a number of serious challenges. Moreover, according to the no free
lunch theorem, no global optimizer is equally successful in tackling all optimization problems.
In this regard, it is vital to retain a diversity of evolutionary algorithms for inverse problem
analysis. Based on these concerns, this thesis proposes three innovative models to increase the
performance of the particle swarm optimization technique.

To control premature convergence and a slow convergence pace, this work introduces new
approaches for setting the three fundamental parameters so as to preserve the diversity of the
particles and maintain the equilibrium between exploration and exploitation searches. The
proposed modified PSO preserves the equilibrium between local and global searches and
maintains particle motion throughout the search process, assisting particles in eluding local
optima [213]–[218]. The suggested approach facilitates the algorithm's progress toward the
global optimum space.

The first model describes a new computational method known as SPSO (Particle Swarm
Optimization with a Smart Particle), which is used to increase the performance of PSO. The
second model includes an improved PSO with a novel approach for the inertia weight and a new
strategy for selecting a different mutation operator depending on the selection ratio. The third
model adds a new dynamic inertia weight, a new partition of the entire population into five
sub-swarms, and a new particle. The presented techniques (three models) trade off between
local and global searches, thereby increasing population diversity and avoiding particle
trapping in local minima. As a result, they boost PSO's performance and strengthen its global
search capability.

The presented PSO models have been validated using a superconducting magnetic energy
storage device and well-known mathematical test functions. The numerical findings reveal
that the suggested PSOs outperform existing well-designed stochastic algorithms in terms of
efficiency and usefulness.


In summary, despite the significant development and utilization of many PSO versions, no
existing PSO can handle all engineering design problems. In this work, we present new
methodologies for controlling premature convergence, making the PSO more robust, and
increasing its convergence speed.

4.2 A Modified Particle Swarm Optimization with a Smart Particle

As explained previously, several PSO variants have been reported in the literature in recent
decades. The majority have strong searching abilities, but due to the static placement of their
personal best and global best particles, they become locked in local minima in the early search
generations. Because the basic parameters in the classical PSO remain constant, the approach
converges to a local optimum region. As a consequence, the classic PSO approach suffers from
premature convergence in complex, dynamic, and multimodal scenarios. Particle swarm
optimization with a smart particle, a newly designed algorithm, is thus introduced to improve
the efficiency of PSO. The primary idea of the offered approach is to strike a compromise
between the particles' global and local search functions while enhancing individual variability
at the end of the iteration process. In the improved approach, we devise a new technique
known as the convergence factor (CF).

Memorization, comparison, and leader declaration are the three processes by which the
convergence factor trains the particle to exhibit smart behavior.

1) Memorization: eidetic memory allows particles to remember their most recent position,
which is stored in the memory array for use by the entire swarm.

2) Comparison: the current particle position is compared with the prior particle position.

The mathematical expression for the comparison process is:

X_i^k = X_i^k if X_i^k is better than X_i^(k−1), and X_i^k = X_i^(k−1) otherwise (4.1)

If the current position is superior (i.e., it has a smaller fitness value or converges faster), it
replaces the prior one; otherwise, the old position is retained.

3) Leader declaration: a particle is declared the (smart) leader based on the fittest value.
Particles are stored in the memory collection over all generations, and the fittest particle is
chosen to guide the swarm to the global minimum. Figure 4.1 illustrates the flowchart of the
developed SPSO algorithm.
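The three training steps above can be sketched as follows. This is a minimal illustration of the memorize-compare-declare idea under the assumption of fitness minimization; the data structure and function names are illustrative, not the thesis's exact implementation.

```python
def update_smart_particle(memory, positions, fitness):
    """One convergence-factor step: memorize, compare, declare a leader.

    memory    : dict particle_id -> (best_position, best_fitness) seen so far
    positions : current particle positions
    fitness   : current fitness values (smaller is better)
    Returns the position of the declared leader (the smart particle).
    """
    for i, (x, f) in enumerate(zip(positions, fitness)):
        # 1) Memorization + 2) Comparison: keep the better of old/new position
        if i not in memory or f < memory[i][1]:
            memory[i] = (list(x), f)
    # 3) Leader declaration: the fittest memorized particle guides the swarm
    leader_id = min(memory, key=lambda i: memory[i][1])
    return memory[leader_id][0]

# Usage: the leader survives across generations even if fitness worsens later
mem = {}
leader = update_smart_particle(mem, [[1.0, 2.0], [0.1, 0.2]], [5.0, 0.05])
worse = update_smart_particle(mem, [[9.0, 9.0], [9.0, 9.0]], [100.0, 100.0])
```

Because the memory array is only overwritten on improvement, the declared leader can never regress, which is exactly the survival property the SPSO aims for.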

The major objective of our SPSO is to guarantee that the best (smart) particle survives and
acts as a leader. The CF improves the PSO's outcomes throughout the SPSO search process,
and as a result, the entire swarm eventually converges on an optimal solution. According to
this concept, the smart particle functions as a superior leader, capable of improving the search
method in the same manner as a skilled leader may improve an organization's efficiency.
Furthermore, previous algorithms failed to provide appropriate results with large populations,
as they mostly used small population sizes, whereas the new techniques produce decent
results with large populations as well.


Figure 4.1. Flow-chart of SPSO algorithm

Compared to earlier strategies, particles using this strategy reach the solutions sooner,
especially in the early generations.

The suggested procedure is simple to implement and takes less time than other familiar
methods. SPSO offers more accurate findings across all 100 runs, as illustrated in the next
section. In terms of objective function values, the values obtained by the proposed technique
are better than those obtained by current PSO variants.

4.3 The proposed modified PSO

Exploitation is the capacity of an algorithm to enhance the best solution it has identified so far
by examining a limited region of the solution space; in PSO, all particles then converge on the
same peak of the objective function and remain there. The exploration characteristic of an
algorithm, on the other hand, defines its ability to leave the present peak and search for a
better solution. In light of this definition, we look into how the inertia weight affects PSO's
capacity for exploration and exploitation. The inertia weight determines how much a
particle's previous velocity contributes to its current velocity. If the effects of the personal
best and global best positions are ignored in Equation (1) (i.e., c1 = c2 = 0), an inertia weight
greater than 1.0 causes the particle to speed up to the maximum velocity, and an inertia
weight less than 1.0 causes the particle's velocity to decay to zero [47]. It is challenging to
directly assess the effect of the inertia weight when c1 and c2 are not zero. We therefore
reduce the complex challenge of function optimization with PSO to a unimodal function
optimization involving two particles in two distinct states. In the first state, as seen in Figure
1(a), the two particles are distant from the optimum and traveling toward one another. The
fitter particle moves toward the optimum in the same direction as in the scenario where
c1 = c2 = 0. Hence, the particle travels faster toward the optimum when the inertia weight has
larger values, whereas smaller inertia weight values prevent the particle from reaching the
optimum quickly. The other particle's personal best location coincides with its present
position; its velocity is then determined by its prior velocity and the distance to the global
best position. Higher values of the inertia weight are also desired in this scenario, as for the
fitter particle. It should be noted that throughout an optimization process, the role of a particle
may change if the second particle achieves a higher fitness than the fitter one. This discussion
makes it simple to explain the behavior of swarms that comprise more than two particles:
each of the other particles behaves like the less-fit one, and the global best particle always has
the same state as the fitter particle.
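The limiting behavior described above (with c1 = c2 = 0, the velocity update reduces to the recursion v ← w·v) is easy to verify numerically; an inertia weight above 1.0 inflates the speed while one below 1.0 damps it toward zero:

```python
def velocity_after(w, v0, steps):
    """Iterate v <- w * v, the PSO velocity recursion when c1 = c2 = 0."""
    v = v0
    for _ in range(steps):
        v = w * v
    return v

grow = velocity_after(1.1, 1.0, 50)   # w > 1: |v| grows geometrically (toward v_max)
decay = velocity_after(0.9, 1.0, 50)  # w < 1: |v| decays geometrically toward zero
```

After 50 steps, w = 1.1 multiplies the initial speed by 1.1^50 (over a hundredfold), while w = 0.9 shrinks it below one percent of its initial value, which is the quantitative basis for preferring large w in exploration and small w in exploitation.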

In the second state, as depicted in Figure 1(b), the absorbing points, the personal best, and the
global best locations are positioned close to the optimum, and the particle ends up in a limited
area that should be exploited to improve the best solution, the improvement zone. Particles
traveling at high speed around the absorbing points pass close to the improvement region but
have little chance of dropping into it. To prevent this movement around the improvement
region, the particle's velocity should be reduced by diminishing the impact of its previous
velocity; small inertia weight values can be used to achieve this. Figure 1 illustrates how
particles may alternate between the two states while the optimization process is underway. It
is also conceivable, and quite common, to have various particles in different states at the same
time. Depending on the inertia weighting approach the particle employs, the number of
generations needed to keep the particle in the same state varies. In this perspective, the
suggested technique uses a tangent random function, and the learning parameters are updated
at random. As previously stated, the particles require smaller inertia weight values for
exploitation searches and larger inertia weight values for global searches. Because the inertia
weight is constant, the particles cannot attain the global optimal solution and quickly
converge to the local optima when the basic PSO works on higher-dimensional problems (the
static inertia weight cannot provide a proper balance between the exploration search and the
exploitation search). On the other hand, particle diversity is greatest at the start of the search
process and decreases as the evolution progresses. Because the particles then lack the ability
to explore, the trade-off between global and local search becomes disordered, and PSO cannot
find the global optimal solution. Consequently, all particles eventually arrive at the local
optima, which was previously referred to as premature convergence. To resolve this issue,
dynamic parameters must be added to the traditional PSO. From this vantage point, a new
variation of the inertia weight and learning factors is proposed that controls the diversity of
the particles and allows them to explore more areas during the search process. Additionally,
the proposed dynamic inertia weight creates a good equilibrium between local and global
searches. The first thorough review of various inertia weight strategies was produced by
Ahmad Nickabadi et al., who also suggested a novel adaptive approach to the inertia weight:
the new adaptive inertia weight guides the particles' exploration of the search space by using
the population's success rate as a feedback parameter.
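The success-rate feedback idea can be sketched as follows. The linear mapping from success rate to inertia weight follows the form commonly attributed to this adaptive scheme; the bound values w_min = 0.4 and w_max = 0.9 are illustrative, not prescribed by the thesis.

```python
def adaptive_inertia(successes, swarm_size, w_min=0.4, w_max=0.9):
    """Map the fraction of particles that improved their personal best this
    generation (the success rate) linearly onto [w_min, w_max]."""
    ps = successes / swarm_size          # success rate in [0, 1]
    return (w_max - w_min) * ps + w_min

# Early on, many particles still improve -> large w favors exploration;
# near convergence, few improve -> small w favors exploitation.
```

The feedback is self-regulating: a stagnating swarm (low success rate) automatically drops the inertia weight and switches to fine-grained exploitation.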

4.4 Dynamic inertia weight

The inertia weight is an important parameter in PSO. Throughout the evolution process, low
inertia weight values are required for exploitation search capabilities, while higher inertia
weight values are required for global search capabilities. When dealing with higher-dimensional
problems under a static inertia weight, the particles are unable to reach the global optimal
solution and quickly converge to the local optima, because a static inertia weight cannot
provide an appropriate balance between the exploration search and the exploitation search.
The proposed dynamic inertia weight therefore decays with the generation counter k (where
kmax is the maximum number of generations):

W = exp(−k/kmax) (4.2)

4.5 Learning parameters

The stochastic acceleration has a significant impact on the velocity of the particles, so the
cognitive and social components must be properly adjusted throughout the optimization
process. According to Kennedy and Eberhart, the cognitive factor c1 helps particles escape
local optima, while the social factor c2 helps them find the global minimum. The proposed
dynamic learning parameters are:

c1 = rand() + (kmax/k)/kmax (4.3)

c2 = rand() + k/kmax (4.4)
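The per-generation parameter schedule of Equations (4.2)-(4.4) can be computed as below; `rand()` is taken to be a uniform draw on [0, 1), and the code follows the formulas above directly (so c1 carries a decreasing offset and c2 an increasing one as k runs from 1 to kmax):

```python
import math
import random

def dynamic_parameters(k, k_max):
    """Per-generation PSO parameters from Eqs. (4.2)-(4.4), for 1 <= k <= k_max:
    exponentially decaying inertia weight, decreasing c1 offset, increasing c2 offset."""
    w = math.exp(-k / k_max)                        # Eq. (4.2)
    c1 = random.random() + (k_max / k) / k_max      # Eq. (4.3)
    c2 = random.random() + k / k_max                # Eq. (4.4)
    return w, c1, c2
```

At the first generation the cognitive offset is at its maximum (favoring exploration of personal bests), while at the last generation the social offset dominates (pulling the swarm toward the global best).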

4.6 Convergence performances of different optimal algorithms

To evaluate the performance of the proposed PSO, we compared the convergence graphs of
different optimal algorithms. The convergence characteristics of the different optimal
algorithms are given in Figs. 4.2~4.4. Some typical observations follow. Considering the
convergence plot of the test function f1, we observe that our proposed modified PSO
converges to the optimal solution after 50 generations, while all the other optimal algorithms
and inertia weight strategies converge to local optima during the evolution process. The
convergence plot of the test function f2 shows a similar behavior.

Similarly, comparing the different optimal algorithms on the test function f3, we see that our
proposed modified PSO finds the optimal solution at the early stages of the optimization
process, while all the inertia weight strategies and IPSO, GPSO, and BPSO converge to a
local optimal point. Considering the convergence plot of the test function f5, our newly
proposed PSO converges to the global optimum at the very initial stages of the search process.
From the above observations it is clear that the new PSO approach is the best-performing
algorithm for high-dimensional optimization problems compared to the other optimal
algorithms, and it also outperforms the well-known inertia weight strategies.

Table 1. Performance comparison of different optimal algorithms for 10-dimensional
optimization problems.

Algorithm          f1       f2        f3       f4           f5
GPSO     Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^3
         Max     91.75    61.34     32.65    6.07×10^13   5.341×10^9
         Std     91.75    61.34     32.65    6.07×10^13   5.341×10^4
AMPSO    Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^3
         Max     45.75    61.34     32.65    6.07×10^13   5.341×10^9
         Std     91.75    61.34     32.65    3.07×10^13   5.341×10^9
MPSO     Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^4
         Max     91.75    61.34     32.65    8.07×10^13   5.341×10^9
         Std     78.75    61.34     32.65    6.07×10^13   5.341×10^9
MPSOED   Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^3
         Max     90.75    61.34     32.65    5.07×10^13   5.341×10^9
         Std     91.75    61.34     32.65    6.07×10^13   5.341×10^7
GCMPSO   Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^3
         Max     91.75    60.34     32.65    5.07×10^13   5.341×10^8
         Std     91.75    61.34     32.65    2.07×10^13   5.341×10^7
DPSO     Mean    1.826    14.1067   20.853   100.19       1.0538
         Min     0.125    6.138     10.87    9.04×10^8    1.52×10^3
         Max     81.75    61.34     32.65    7.07×10^13   5.341×10^9
         Std     67.75    61.34     32.65    6.07×10^13   5.341×10^4

4.7 Results after testing all algorithms

To fairly compare the different approaches in the empirical analysis of these optimization
functions, we employed identical parameter settings for all algorithms in the computational
tests. The maximum number of iterations was 1000. The benchmark test functions De Jong,
HappyCat, Step, Bent Cigar, and Alpine 1 have several local minima and a single global
optimal solution. The data in Table 1 show that our novel strategy outperforms existing
approaches such as GPSO, AMPSO, MPSO, MPSOED, and GCMPSO.

Table 2. Best objective function values for optimization benchmark problems

Function     GPSO   AMPSO  MPSOED  GCMPSO    MPSO   DPSO
De Jong's    -1.2   -2.9   -12.8   -8.8      -6.7   -91.0984
HappyCat     0.5    1.5    1.9     1.4       2      -0.17177
Step         -5.3   -2.3   -6.9    -13.1516  -8.7   -61.775
Bent Cigar   -7.8   -7.9   -7.68   -7.23305  -12.8  -34.9803
Alpine 1     -3.8   -1.5   -7      -5        -3.8   -63.0051


Figure 4.2. Convergence curves for the De Jong benchmark function

Figure 4.3. Convergence curves for the HappyCat benchmark function


Figure 4.4. Convergence curves for the Step benchmark function

Figure 4.5. Convergence curves for the Bent Cigar benchmark function

Figure 4.6. Convergence curves for the Alpine 1 benchmark function

Table 3. Results comparison of the proposed DPSO with other variants on TEAM Workshop Problem 22
Algorithm Best objective function value
GPSO 0.1287
AMPSO 0.1136
MPSO 0.1356
MPSOED 0.1123
GCMPSO 0.1210
DPSO 0.097


4.8 TEAM WORKSHOP Problem 22

4.8.1 Description

It is also significant that most problems in the realm of electromagnetic design optimization
are described by nonlinear relationships. TEAM problem 22 is a helpful benchmark for
assessing how well different optimization approaches perform.

The three models presented here are used to solve a standard inverse problem in the low-frequency
scope, TEAM problem 22, concerning a superconducting magnetic energy storage
(SMES) system [176].

Figure 4.7: SMES configuration


This is a continuous, three-parameter benchmark problem with two concentric coils, namely
the inner primary solenoid and the outer shielding solenoid that limits the stray field. The
currents in the two coils flow in opposite directions. The SMES system must be optimized
with respect to size and current capacity, with a focus on minimizing the magnetic stray field
while maintaining the stored energy value [235-236].

The primary purpose of the SMES design is to achieve the necessary stored magnetic energy
with minimal stray fields. The primary goals and constraints are:

1. The magnetic energy stored in the system should be 180 MJ;

2. The magnetic field produced inside the solenoids must not violate the physical conditions
that ensure superconductivity of the coils;

3. The mean stray field at 22 measurement points along lines A and B at a distance of 10
meters should be as small as possible.


SMES has various applications in biomedical engineering: nuclear magnetic resonance
spectroscopy requires a homogeneous magnetic field, and magnetic resonance imaging
requires a linear field profile. Furthermore, in magneto-fluid hyperthermia (MFH), field
uniformity assists in the uniform dispersion of heat generated in the nanoparticle fluid
previously injected into the target location, such as a tumor mass being treated. As a result,
this benchmark problem is motivated by broad practical applications.

4.8.2 Application

We chose TEAM workshop problem 22 (SMES), another case study of an engineering
electromagnetic device, to test the performance of our proposed approach. Computational
electromagnetics provides the best design for the SMES device. The design process considers
three factors related to the development of the SMES. The SMES device essentially combines
two objectives, the magnetically stored energy within the coils and the stray field, with the
values Wref = 180 MJ, N = 22, and Bnorm = 3 mT.

The objective function, combining the stray magnetic field with the stored-energy deviation,
is:

OF = Bstray^2 / Bnorm^2 + |E − Eref| / Eref, with Bstray^2 = (1/22) Σ_{i=1..22} |Bstray,i|^2 (4.5)

subject to the quench condition that keeps the coils superconducting:

|J| ≤ (−6.4 |B| + 54.0) A/mm^2 (4.6)

The inner solenoid of the SMES device is fixed: r1 = 2 m, d1 = 0.27 m, and h1/2 = 0.8 m,
while the dimensions of the external solenoid are optimized within 0.6 m ≤ r2 ≤ 3.4 m and
0.1 m ≤ h2/2 ≤ 0.4 m.
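Under the standard TEAM 22 formulation, the quality of a candidate design is scored by combining the normalized mean-square stray field with the relative deviation of the stored energy from the 180 MJ target. A sketch of such an evaluation is given below; it assumes the stray-field values at the 22 measurement points and the stored energy have already been computed by a field solver (the function name and interface are illustrative):

```python
def team22_objective(b_stray, energy, e_ref=180e6, b_norm=3e-3):
    """Score a SMES design (smaller is better): mean-square stray field over
    the measurement points on lines A and B, normalized by B_norm^2, plus the
    relative deviation of the stored energy from the 180 MJ reference.

    b_stray : sequence of |B_stray,i| values in tesla (22 points for TEAM 22)
    energy  : stored magnetic energy in joules
    """
    n = len(b_stray)
    b2 = sum(b * b for b in b_stray) / n    # mean of |B_stray,i|^2
    return b2 / (b_norm ** 2) + abs(energy - e_ref) / e_ref
```

A design that exactly meets the energy target with zero stray field scores 0; a design whose mean stray field equals B_norm contributes 1 from the field term alone, so the two terms are on comparable scales.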

4.9 Conclusion

The three fundamental PSO parameters play a significant role in the search process, as the
previously reviewed research has shown. If these parameters are set incorrectly, the PSO
converges to the local optima. This research offered various dynamic strategies for updating
the three fundamental parameters to address this issue. The numerical studies show that the
updated PSO is more effective at addressing engineering design problems and is better suited
to handling complicated optimization challenges.

Chapter 5

CONCLUSION

5.1 Conclusion

This thesis focuses on the particle swarm optimization (PSO) method and its applications to
electromagnetic design problems. The main objective was to improve dynamic particle swarm
optimization when applied to electromagnetic design problems. The new PSO-based approach
for engineering inverse problems reduces the computational burden, preserves the diversity of
the population, and achieves faster convergence and higher solution accuracy. This objective
is achieved by developing a number of new methods (four improved models) based on PSO.
The first proposed model is called GPSO. In this model, a new mutation mechanism is
proposed, in which the mutation operator is combined with a beta probability distribution
function. The overall contribution of the proposed approach is the design of a new position-updating
formula, the introduction of a new mutation strategy, and a dynamic control
parameter to preserve a good balance between exploration and exploitation searches. The
second proposed model is called AMPSO. In this model, a new mutation mechanism is used
with an improved factor, with the goal of preserving the diversity of the swarm at the later
stages of evolution. In addition, a parameter-updating strategy is proposed to trade off
between local and global searches. The third proposed model is called MPSO. In this model,
a new best particle is introduced and chosen using the tournament selection methodology to
keep the particles diverse and thereby escape from local minima. Also, a new formula for the
control parameter is developed to further help the particles avoid local minima. Moreover, the
dynamic changes in the contraction-expansion (CE) parameter strike a good balance between
exploration and exploitation searches. The fourth model, called MPSOED, introduces a novel
selection methodology that chooses the best particle among the population based on the
fitness values of the particles. The fittest particle then takes part in the search process to help
the particles escape from local optima. Also, a mutation operator is applied to the global best
particle to further enhance the algorithm's global search. In addition, a parameter-updating
strategy is incorporated in the new improved method. It can be concluded that the outcomes
(and efficiency) of our improvements are justified by their performance on well-known
benchmark functions and an electromagnetic design problem.

5.2 Future work

The results of the proposed models are very satisfying and may represent an important
contribution to enhancing PSO performance on other electromagnetic problems. In future
studies, it is important to investigate other optimization approaches for solving
electromagnetic design problems. The proposed approach can also be applied to other
electromagnetic optimization problems, e.g., the Loney's solenoid problem (Barba et al.,
1995). Nevertheless, PSO is still in its infancy, and it is likely that the current work will
stimulate interest and effort among researchers in this area.


List of Publications

Akarawatou AAB, Shah Fahad, Shoaib Ahmed Khan, Yang Shiyou* and Shafiullah Khan, A
Multimodal Improved Dynamic Particle Swarm Optimization for Problems in
Electromagnetic Devices.


References

[1] B. Hofmann, "Ill-posedness and regularization of inverse problems: a review on
mathematical methods," in The Inverse Problem: Symposium ad Memoriam H. v. Helmholtz,
H. Lübbig (Ed.), pp. 45–66. Akademie-Verlag, Berlin; VCH, Weinheim, 2011.

[2] S. I. Kabanikhin, Inverse and Ill-Posed Problems: Theory and Applications. Germany:
De Gruyter, 2011.

[3] P. Argoul, "Overview of inverse problems," PhD diss., Modes, 2012.

[4] S. I. Kabanikhin, "Definitions and examples of inverse and ill-posed problems," J.
Inverse Ill-Posed Probl., vol. 16, no. 4, 2008, doi: 10.1515/JIIP.2008.019.

[5] F. Natterer, The Mathematics of Computerized Tomography. Philadelphia: Society for
Industrial and Applied Mathematics, 2001.

[6] A. Ravindran, K. M. Ragsdell, and G. V. Reklaitis, Engineering Optimization: Methods
and Applications, 2nd ed., 2007.

[7] M. Rudnicki and S. Wiak, Optimization and Inverse Problems in Electromagnetism.
Springer Science & Business Media, 2003.

[8] P. Zhou, Numerical Analysis of Electromagnetic Fields. Berlin, Heidelberg: Springer
Berlin/Heidelberg, 1993.

[9] N. V. Korovkin, V. L. Chechurin, and M. Hayakawa, Inverse Problems in Electric
Circuits and Electromagnetics. New York, NY, USA: Springer, 2007.

[10] E. Curtis and J. Morrow, Inverse Problems for Electrical Networks. Singapore: World
Scientific, 2000.

[11] P. Neittaanmäki, M. Rudnicki, and A. Savini, Inverse Problems and Optimal Design in
Electricity and Magnetism. Oxford, England: Clarendon Press, 1996.

[12] G. Korn and T. Korn, Mathematical Handbook for Scientists and Engineers. Mineola,
N.Y., 2000.

[13] Y. Huang, Y. Ru, Y. Shen, and Z. Zeng, "Characteristics and applications of
superconducting magnetic energy storage," J. Phys.: Conf. Ser., vol. 2108, no. 1, 2021, doi:
10.1088/1742-6596/2108/1/012038.

[14] P. Mukherjee and V. V. Rao, "Superconducting magnetic energy storage for stabilizing
grid integrated with wind power generation systems," J. Mod. Power Syst. Clean Energy,
vol. 7, no. 2, 2019, doi: 10.1007/s40565-018-0460-y.

[15] V. S. Vulusala G and S. Madichetty, "Application of superconducting magnetic energy
storage in electrical power and energy systems: a review," Int. J. Energy Res., vol. 42, no. 2,
2018, doi: 10.1002/er.3773.

[16] J. H. Kim, S. Y. Hahn, C. H. Im, J. K. Kim, H. K. Jung, and S. Y. Hahn, "Design of a
200-kJ HTS SMES system," IEEE Trans. Appl. Supercond., vol. 12, no. 1, 2002, doi:
10.1109/TASC.2002.1018516.

[17] W. V. Hassenzahl and W. R. Meier, "A comparison of large-scale toroidal and
solenoidal SMES systems," IEEE Trans. Magn., vol. 27, no. 2, 1991, doi: 10.1109/20.133683.

[18] H. J. Boenig and J. F. Hauer, "Commissioning tests of the Bonneville Power
Administration 30 MJ superconducting magnetic energy storage unit," IEEE Trans. Power
Appar. Syst., vol. PAS-104, no. 2, 1985, doi: 10.1109/TPAS.1985.319044.

[19] S. Nomura et al., "Design considerations for force-balanced coil applied to SMES,"
IEEE Trans. Appl. Supercond., vol. 11, no. 1, 2001, doi: 10.1109/77.920226.

[20] C. A. Borghi, "Design optimization of a microsuperconducting magnetic energy
storage system," IEEE Trans. Magn., vol. 35, pp. 4275–4284, Sept. 1999.

[21] B. Hofmann. "Ill-posedness and regularization of inverse problemsa review on


mathematical methods" In the Inverse Problem. Symposium ad Memoriam H. v. Helmholtz,
H. Lubbig (Ed)., pages 45–66. Akademie-Verlag, Berlin; VCH, Weinheim, 2011

[22] S. I. Kabanikhin, "Inverse and Ill-posed problems. Theory and applications" in ,


Germany: De Gruyter, 2011.

[23 ] Argoul, Pierre. "Overview of inverse problems" PhD diss., Modes, 2012.

[24] S. I. Kabanikhin, "Definitions and examples of inverse and ill-posed problems" J.


Inverse Ill-Posed Probl., vol. 16, no. 4, 2008, doi: 10.1515/JIIP.2008.019.

[25] F. Natterer, "The mathematics of computerized tomography". Philadelphia: Society for


Industrial and Applied Mathematics, 2001.

[26] A. Ravindran, K. M. Ragsdell, and G. V. Reklaitis, "Engineering Optimization:


Methods and Applications" Second Edition. 2007.

[27] N. V. Korovkin, V. L. Chechurin and M. Hayakawa, "Inverse Problem in Electric


Circuits and Electromagnetics" New York, NY, USA: Springer, 2007.

[28] P. Zhou, "Numerical Analysis of Electromagnetic Fields" Berlin, Heidelberg: Springer


Berlin / Heidelberg, 1993.

[29] M. Rudnicki and W. Slawomir, "Optimization and Inverse Problems in


Electromagnetism" Springer Science & Business Media, 2003.

[30] E. Curtis and J. Morrow, "Inverse problems for electrical networks" Singapore: World
Scientific, 2000.

[31] P. Neittaanmäki, M. Rudnicki and A. Savini, "Inverse problems and optimal design in
electricity and magnetism" Oxford [England]: Clarendon Press, 1996.

[32] G. Korn and T. Korn, "Mathematical handbook for scientists and engineers" Mineola:
N.Y., 2000.

70
浙江大学硕士学位论文

[33] Y. Huang, Y. Ru, Y. Shen, and Z. Zeng, “Characteristics and Applications of Superconducting Magnetic Energy Storage,” in Journal of Physics: Conference Series, 2021, vol. 2108, no. 1, doi: 10.1088/1742-6596/2108/1/012038.

[34] P. Mukherjee and V. V. Rao, “Superconducting magnetic energy storage for stabilizing grid integrated with wind power generation systems,” J. Mod. Power Syst. Clean Energy, vol. 7, no. 2, 2019, doi: 10.1007/s40565-018-0460-y.

[35] V. S. Vulusala G and S. Madichetty, “Application of superconducting magnetic energy storage in electrical power and energy systems: a review,” International Journal of Energy Research, vol. 42, no. 2, 2018, doi: 10.1002/er.3773.

[36] J. H. Kim, S. Y. Hahn, C. H. Im, J. K. Kim, H. K. Jung, and S. Y. Hahn, “Design of a 200-kJ HTS SMES system,” IEEE Trans. on Appl. Superconductivity, 2002, vol. 12, no. 1, doi: 10.1109/TASC.2002.1018516.

[37] W. V. Hassenzahl and W. R. Meier, “A Comparison of Large-Scale Toroidal and Solenoidal SMES Systems,” IEEE Trans. Magn., vol. 27, no. 2, 1991, doi: 10.1109/20.133683.

[38] H. J. Boenig and J. F. Hauer, “Commissioning tests of the Bonneville Power Administration 30 MJ superconducting magnetic energy storage unit,” IEEE Trans. Power Appar. Syst., vol. PAS-104, no. 2, 1985, doi: 10.1109/TPAS.1985.319044.

[39] S. Nomura et al., “Design considerations for force-balanced coil applied to SMES,”
IEEE Trans. on Appl. Superconductivity, 2001, vol. 11, no. 1 II, doi: 10.1109/77.920226.

[40] C. A. Borghi, "Design optimization of a microsuperconducting magnetic energy storage system", IEEE Trans. Magnetics, vol. 35, pp. 4275-4284, Sept. 1999.

[41] X.-S. Yang, "Metaheuristic Optimization", Scholarpedia, vol. 6, no. 8, pp. 11472, 2011.

[42] J. A. Parejo, A. Ruiz-Cortés, S. Lozano and P. Fernandez, "Metaheuristic optimization frameworks: A survey and benchmarking", Soft Comput., vol. 16, no. 3, pp. 527-561, 2012.

[43] P. Pedregal, "Introduction to Optimization", New York, NY: Springer, 2006.

[44] U. Diwekar, "Introduction to Applied Optimization", Cham: Springer International Publishing AG, 2021.

[45] D. Dasgupta and Z. Michalewicz, Eds., "Evolutionary algorithms in engineering applications", Springer Science & Business Media, 2013.

[46] O. Bozorg-Haddad, M. Solgi and H. Loáiciga, "Meta-heuristic and Evolutionary Algorithms for Engineering Optimization", John Wiley & Sons, 2017.

[47] F. S. Lobato and V. Steffen, "Multi-Objective Optimization Problems: Concepts and Self-Adaptive Parameters With Mathematical and Engineering Applications", Cham, Switzerland: Springer, 2017.

[48] X. Lin, H. L. Zhen, Z. Li, Q. Zhang, and S. Kwong, “Pareto multi-task learning,” in
Advances in Neural Information Processing Systems, 2019, vol. 32.

[49] R. Horst and P. Pardalos, Handbook of Global Optimization. New York, NY:
Springer, 2013.

[50] F. D. Moura Neto and A. J. Da Silva Neto, An introduction to inverse problems with
applications, vol. 9783642325571. Springer Science & Business Media, 2012.

[51] T. Kowalczyk, T. Furukawa, S. Yoshimura, and G. Yagawa, “An extensible evolutionary algorithm approach for inverse problems,” Inverse Problems in Engineering Mechanics, 1998.

[52] F. W. Glover and G. A. Kochenberger, "Handbook of Metaheuristics", Norwell, MA: Kluwer Academic, 2003.

[53] F. Glover and M. Laguna, "Tabu Search", Boston, MA: Kluwer Academic, 1997.

[54] C. Blum and A. Roli, “Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison,” ACM Computing Surveys, vol. 35, no. 3, 2003, doi: 10.1145/937503.937505.

[55] X.-S. Yang and M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, 2013.

[56] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in ICNN’95 - International Conference on Neural Networks, 1995, pp. 1942–1948.

[57] M. Dorigo, "Optimization, Learning and Natural Algorithms", Ph.D. Thesis, Politecnico di Milano, 1992.

[58] R. Storn and K. Price, “Differential Evolution - A Simple and Efficient Heuristic for
Global Optimization over Continuous Spaces,” J. Glob. Optim., vol. 11, no. 4, 1997, doi:
10.1023/A:1008202821328.

[59] D. Karaboga and B. Basturk, “Artificial Bee Colony (ABC) optimization algorithm for
solving constrained optimization problems,” in Lecture Notes in Computer Science (including
subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2007,
vol. 4529 LNAI, doi: 10.1007/978-3-540-72950-1_77.

[60] K. N. Krishnanand and D. Ghose, “Detection of multiple source locations using a glowworm metaphor with applications to collective robotics,” in Proceedings - 2005 IEEE Swarm Intelligence Symposium, SIS 2005, 2005, vol. 2005, doi: 10.1109/SIS.2005.1501606.

[61] X.-S. Yang, “A New Metaheuristic Bat-Inspired Algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65-74, Springer, Berlin, Heidelberg, 2010.

[62] R. Brociek, A. Chmielowska, and D. Słota, “Comparison of the probabilistic ant colony optimization algorithm and some iteration method in application for solving the inverse problem on model with the Caputo type fractional derivative,” Entropy, vol. 22, no. 5, 2020, doi: 10.3390/E22050555.

[63] G. Dhiman and V. Kumar, "Multi-objective spotted hyena optimizer: A multi-objective optimization algorithm for engineering problems", Knowledge-Based Systems, vol. 150, pp. 175-197, 2018.

[64] S. L. Ho and S. Yang, “An artificial bee colony algorithm for inverse problems,” Int.
J. Appl. Electromagn. Mech., vol. 31, no. 3, 2009, doi: 10.3233/JAE-2009-1056.

[65] E. G. Magacho, A. B. Jorge, and G. F. Gomes, “Inverse problem based multiobjective sunflower optimization for structural health monitoring of three-dimensional trusses,” Evol. Intell., 2021, doi: 10.1007/s12065-021-00652-4.

[66] G. Crevecoeur, P. Sergeant, L. Dupre, and R. Van De Walle, “A two-level genetic algorithm for electromagnetic optimization,” IEEE Trans. Magn., vol. 46, no. 7, 2010, doi: 10.1109/TMAG.2010.2044186.

[67] S. L. Ho and S. Yang, “The cross-entropy method and its application to inverse
problems,” in IEEE Transactions on Magnetics, 2010, vol. 46, no. 8, doi:
10.1109/TMAG.2010.2044380.

[68] S. An, S. Yang, S. L. Ho, T. Li, and W. Fu, “A modified tabu search method applied
to inverse problems,” in IEEE Trans. Magn., 2011, vol. 47, no. 5, doi:
10.1109/TMAG.2010.2072914.

[69] G. Dhiman and V. Kumar, “Seagull optimization algorithm: Theory and its
applications for large-scale industrial engineering problems,” Knowledge-Based Syst., vol.
165, 2019, doi: 10.1016/j.knosys.2018.11.024.

[70] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, and A. H. Gandomi, “The Arithmetic Optimization Algorithm,” Comput. Methods Appl. Mech. Eng., vol. 376, 2021, doi: 10.1016/j.cma.2020.113609.

[71] S. Mirjalili, P. Jangir, and S. Saremi, “Multi-objective ant lion optimizer: a multi-
objective optimization algorithm for solving engineering problems,” Appl. Intell., vol. 46, no.
1, 2017, doi: 10.1007/s10489-016-0825-8.

[72] A. Faramarzi, M. Heidarinejad, B. Stephens, and S. Mirjalili, “Equilibrium optimizer: A novel optimization algorithm,” Knowledge-Based Syst., vol. 191, 2020, doi: 10.1016/j.knosys.2019.105190.

[73] S. Mirjalili and A. Lewis, “The Whale Optimization Algorithm,” Adv. Eng. Softw.,
vol. 95, 2016, doi: 10.1016/j.advengsoft.2016.01.008.

[74] I. Ahmadianfar, O. Bozorg-Haddad, and X. Chu, “Gradient-based optimizer: A new metaheuristic optimization algorithm,” Inf. Sci. (Ny)., vol. 540, 2020, doi: 10.1016/j.ins.2020.06.037.

[75] S. Mirjalili, “Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm,” Knowledge-Based Syst., vol. 89, 2015, doi: 10.1016/j.knosys.2015.07.006.

[76] G. Dhiman and V. Kumar, “Emperor penguin optimizer: A bio-inspired algorithm for
engineering problems,” Knowledge-Based Syst., vol. 159, 2018, doi:
10.1016/j.knosys.2018.06.001.

[77] S. Arora and S. Singh, “Butterfly optimization algorithm: a novel approach for global
optimization,” Soft Comput., vol. 23, no. 3, 2019, doi: 10.1007/s00500-018-3102-4.

[78] M. A. Tawhid and V. Savsani, “A novel multi-objective optimization algorithm based on artificial algae for multi-objective engineering design problems,” Appl. Intell., vol. 48, no. 10, 2018, doi: 10.1007/s10489-018-1170-x.

[79] N. Mittal, U. Singh, and B. S. Sohi, “Modified Grey Wolf Optimizer for Global
Engineering Optimization,” Appl. Comput. Intell. Soft Comput., vol. 2016, 2016, doi:
10.1155/2016/7950348.

[80] A. Askarzadeh, “A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm,” Comput. Struct., vol. 169, 2016, doi: 10.1016/j.compstruc.2016.03.001.

[81] A. W. Mohamed, “A novel differential evolution algorithm for solving constrained engineering optimization problems,” J. Intell. Manuf., vol. 29, no. 3, 2018, doi: 10.1007/s10845-017-1294-6.

[82] S. Shadravan, H. R. Naji, and V. K. Bardsiri, “The Sailfish Optimizer: A novel nature-
inspired metaheuristic algorithm for solving constrained engineering optimization problems,”
Eng. Appl. Artif. Intell., vol. 80, 2019, doi: 10.1016/j.engappai.2019.01.001.

[83] M. A. U. R. Sarker and M. R. Islam, “Performance improvement of superconducting magnetic energy storage based ACO controlled hybrid micro-grid system,” in 3rd International Conference on Electrical Information and Communication Technology, EICT 2017, 2018, vol. 2018-January, doi: 10.1109/EICT.2017.8275130.

[84] S. An, S. Yang, S. L. Ho, and P. Ni, “An improved cross-entropy method applied to
inverse problems,” in IEEE Trans. Magn., 2012, vol. 48, no. 2, doi:
10.1109/TMAG.2011.2173303.

[85] W. Zhang, H. Xu, Y. Bai, and S. Yang, “A quantum-inspired evolutionary algorithm applied to design optimizations of electromagnetic devices,” in International Journal of Applied Electromagnetics and Mechanics, 2012, vol. 39, no. 1–4, doi: 10.3233/JAE-2012-1447.

[86] T. C. Bora, L. D. S. Coelho, and L. Lebensztajn, “Bat-inspired optimization approach for the brushless DC wheel motor problem,” in IEEE Trans. on Magn., 2012, vol. 48, no. 2, doi: 10.1109/TMAG.2011.2176108.

[87] S. Coco, A. Laudani, F. R. Fulginei, and A. Salvini, “TEAM problem 22 approached by a hybrid artificial life method,” COMPEL - Int. J. Comput. Math. Electr. Electron. Eng., vol. 31, no. 3, pp. 816–826, 2012, doi: 10.1108/03321641211209726.


[88] W. Yang, H. Zhou, and Y. Li, “A quantum-inspired evolutionary algorithm for global
optimizations of inverse problems,” COMPEL - Int. J. Comput. Math. Electr. Electron. Eng.,
vol. 33, no. 1–2, 2014, doi: 10.1108/COMPEL-11-2012-0333.

[89] S. L. Ho, S. Yang, G. Ni, and J. Huang, “A quantum-based particle swarm optimization algorithm applied to inverse problems,” IEEE Trans. Magn., vol. 49, no. 5, 2013, doi: 10.1109/TMAG.2013.2237760.

[90] Q. Wang, H. Ma, and S. Cao, “A multi-strategy particle swarm optimization algorithm
and its application on hybrid magnetic levitation,” Zhongguo Dianji Gongcheng
Xuebao/Proceedings Chinese Soc. Electr. Eng., vol. 34, no. 30, 2014, doi: 10.13334/j.0258-
8013.pcsee.2014.30.020.

[91] J. Zhao, M. Lin, D. Xu, L. Hao, and W. Zhang, “Vector Control of a Hybrid Axial
Field Flux-Switching Permanent Magnet Machine Based on Particle Swarm Optimization,”
IEEE Trans. Magn., vol. 51, no. 11, 2015, doi: 10.1109/TMAG.2015.2435156.

[92] M. A. A. Chikh, I. Belaidi, S. Khelladi, A. Hamrani, and F. Bakir, “Coupling of inverse method and cuckoo search algorithm for multiobjective optimization design of an axial flow pump,” Proc. Inst. Mech. Eng. Part A J. Power Energy, vol. 233, no. 8, 2019, doi: 10.1177/0957650919844112.

[93] S. Khan, S. Yang, and O. Ur Rehman, “A global particle swarm optimization algorithm applied to electromagnetic design problem,” Int. J. Appl. Electromagn. Mech., vol. 53, no. 3, pp. 451–467, 2017, doi: 10.3233/JAE-160063.

[94] J. H. Lee, J. Y. Song, D. W. Kim, J. W. Kim, Y. J. Kim, and S. Y. Jung, “Particle swarm optimization algorithm with intelligent particle number control for optimal design of electric machines,” IEEE Trans. Ind. Electron., vol. 65, no. 2, 2017, doi: 10.1109/TIE.2017.2760838.

[95] S. K. Goudos, Z. D. Zaharis, and K. B. Baltzis, “Particle swarm optimization as applied to electromagnetic design problems,” Int. J. Swarm Intell. Res., vol. 9, no. 2, pp. 47–82, 2018, doi: 10.4018/IJSIR.2018040104.

[96] L. Duca and C. Popescu, “Improving pso based algorithms with the domain-shrinking
technique for electromagnetic devices optimization,” UPB Sci. Bull. Ser. C Electr. Eng.
Comput. Sci., vol. 80, no. 1, 2018.

[97] O. U. Rehman, S. Yang, S. Khan, and S. U. Rehman, “A Quantum Particle Swarm Optimizer with Enhanced Strategy for Global Optimization of Electromagnetic Devices,” IEEE Trans. Magn., vol. 55, no. 8, pp. 1–4, 2019, doi: 10.1109/TMAG.2019.2913021.

[98] C. T. Krasopoulos, M. E. Beniakar, and A. G. Kladas, “Robust Optimization of High-Speed PM Motor Design,” IEEE Trans. Magn., vol. 53, no. 6, 2017, doi: 10.1109/TMAG.2017.2660238.

[99] Q. Zhou et al., “A robust optimization approach based on multi-fidelity metamodel,” Struct. Multidiscip. Optim., vol. 57, no. 2, 2018, doi: 10.1007/s00158-017-1783-4.

[100] Y. Zhang, X. Ai, J. Wen, J. Fang, and H. He, “Data-Adaptive Robust Optimization
Method for the Economic Dispatch of Active Distribution Networks,” IEEE Trans. Smart
Grid, vol. 10, no. 4, 2019, doi: 10.1109/TSG.2018.2834952.

[101] P. Di Barba, M. E. Mognaschi, G. M. Lozito, A. Salvini, F. Dughiero, and I. E. Sieni, “The Benchmark TEAM Problem for Multi-Objective Optimization Solved with CFSO,” in 2018 IEEE 4th International Forum on Research and Technology for Society and Industry (RTSI), pp. 1-5, IEEE, 2018, doi: 10.1109/RTSI.2018.8548364.

[102] X. W. Zhang, H. Liu, and L. P. Tu, “A modified particle swarm optimization for
multimodal multi-objective optimization,” Eng. Appl. Artif. Intell., vol. 95, 2020, doi:
10.1016/j.engappai.2020.103905.

[103] V. Trivedi, P. Varshney, and M. Ramteke, “A simplified multi-objective particle swarm optimization algorithm,” Swarm Intell., vol. 14, no. 2, 2020, doi: 10.1007/s11721-019-00170-1.

[104] X. F. Liu, Y. R. Zhou, and X. Yu, “Cooperative particle swarm optimization with
reference-point-based prediction strategy for dynamic multiobjective optimization,” Appl.
Soft Comput. J., vol. 87, 2020, doi: 10.1016/j.asoc.2019.105988.

[105] B. Tang, K. Xiang, and M. Pang, “An integrated particle swarm optimization approach hybridizing a new self-adaptive particle swarm optimization with a modified differential evolution,” Neural Comput. Appl., vol. 32, no. 9, pp. 4849–4883, 2020, doi: 10.1007/s00521-018-3878-2.

[106] E. Mirsadeghi and S. Khodayifar, “Hybridizing particle swarm optimization with simulated annealing and differential evolution,” Cluster Comput., vol. 24, no. 2, 2021, doi: 10.1007/s10586-020-03179-y.

[107] S. Sun, H. Zhang, X. Cui, L. Dong, M. S. Khan, and X. Fang, “Multibyte electromagnetic analysis based on particle swarm optimization algorithm,” Appl. Sci., vol. 11, no. 2, 2021, doi: 10.3390/app11020839.

[108] Z. Wang, Q. Luo, and Y. Zhou, “Hybrid metaheuristic algorithm using butterfly and
flower pollination base on mutualism mechanism for global optimization problems,” Eng.
Comput., vol. 37, no. 4, 2021, doi: 10.1007/s00366-020-01025-8.

[109] A. H. Gandomi, X. S. Yang, S. Talatahari, and A. H. Alavi, “Metaheuristic Algorithms in Modeling and Optimization,” Metaheuristic Applications in Structures and Infrastructures, 2013.

[110] R. Martí and G. Reinelt, “The linear ordering problem: Exact and heuristic methods in
combinatorial optimization,” Applied Mathematical Sciences (Switzerland), vol. 175. 2011,
doi: 10.1007/978-3-642-16729-4_1.

[111] P. H. V. B. da Silva, E. Camponogara, L. O. Seman, G. V. González, and V. R. Q. Leithardt, “Decompositions for MPC of linear dynamic systems with activation constraints,” Energies, vol. 13, no. 21, 2020, doi: 10.3390/en13215744.

[112] M. Dorigo and T. Stützle, “Ant colony optimization: Overview and recent advances,”
in International Series in Operations Research and Management Science, vol. 272, 2019.

[113] H. A. Mohamed Shaffril, S. F. Samsuddin, and A. Abu Samah, “The ABC of systematic literature review: the basic methodological guidance for beginners,” Qual. Quant., vol. 55, no. 4, 2021, doi: 10.1007/s11135-020-01059-6.

[114] S. L. Tilahun, J. M. T. Ngnotchouye, and N. N. Hamadneh, “Continuous versions of firefly algorithm: a review,” Artif. Intell. Rev., vol. 51, no. 3, 2019, doi: 10.1007/s10462-017-9568-0.

[115] M. Guerrero-Luis, F. Valdez, and O. Castillo, “A Review on the Cuckoo Search Algorithm,” Studies in Computational Intelligence, vol. 940, 2021.

[116] X. S. Yang, S. Deb, Y. X. Zhao, S. Fong, and X. He, “Swarm intelligence: past,
present and future,” Soft Comput., vol. 22, no. 18, 2018, doi: 10.1007/s00500-017-2810-5.

[117] S. Aslan, H. Badem, and D. Karaboga, “Improved quick artificial bee colony (iqABC)
algorithm for global optimization,” Soft Comput., vol. 23, no. 24, 2019, doi: 10.1007/s00500-
019-03858-y.

[118] D. Kumar, B. G. R. Gandhi, and R. K. Bhattacharjya, “Firefly Algorithm and Its Applications in Engineering Optimization,” Modeling and Optimization in Science and Technologies, vol. 16, 2020.

[119] M. G. H. Omran and S. Al-Sharhan, “Improved continuous Ant Colony Optimization algorithms for real-world engineering optimization problems,” Eng. Appl. Artif. Intell., vol. 85, 2019, doi: 10.1016/j.engappai.2019.08.009.

[120] M. N. Ab Wahab, S. Nefti-Meziani, and A. Atyabi, “A comprehensive review of swarm optimization algorithms,” PLoS One, vol. 10, no. 5, p. e0122827, 2015, doi: 10.1371/journal.pone.0122827.

[121] J. Zhang, M. Xiao, L. Gao, and Q. Pan, “Queuing search algorithm: A novel
metaheuristic algorithm for solving engineering optimization problems,” Appl. Math. Model.,
vol. 63, pp. 464–490, 2018, doi: 10.1016/j.apm.2018.06.036.

[122] H. Duan, M. Huo, and Y. Deng, “Cauchy-Gaussian pigeon-inspired optimisation for electromagnetic inverse problem,” Int. J. Bio-Inspired Comput., vol. 17, no. 3, 2021, doi: 10.1504/ijbic.2021.10037531.

[123] F. van den Bergh, "An analysis of particle swarm optimizers", Ph.D. thesis, University of Pretoria, South Africa, 2001.

[124] C. Reynolds, "Flocks, herds and schools: A distributed behavioral model", ACM
SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 25-34, 1987. Available:
10.1145/37402.37406.

[125] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence
in a multidimensional complex space", IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 58-73,
2002. Available: 10.1109/4235.985692.

[126] Y. Shi and R. Eberhart, “A modified Particle Swarm Optimizer,” in 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), 1998, pp. 69–73.

[127] A. Carlisle and G. Dozier, “An Off-The-Shelf PSO,” Proc. Work. Part. swarm Optim.
(Indianapolis, IN), 2001.

[128] E. Ozcan and C. K. Mohan, “Analysis of a simple particle swarm optimization system,” Intell. Eng. Syst. Through Artif. Neural Networks, vol. 1998, 1998.

[129] J. C. Bansal, P. K. Singh, M. Saraswat, A. Verma, S. S. Jadon, and A. Abraham, “Inertia weight strategies in particle swarm optimization,” in 2011 Third World Congress on Nature and Biologically Inspired Computing, IEEE, 2011, pp. 633-640, doi: 10.1109/NaBIC.2011.6089659.

[130] S. Medasani and Y. Owechko, “Possibilistic particle swarms for optimization,” in Applications of Neural Networks and Machine Learning in Image Processing IX, International Society for Optics and Photonics, vol. 5673, pp. 82-89, 2005, doi: 10.1117/12.588353.

[131] F. van den Bergh and A. P. Engelbrecht, "A New Locally Convergent Particle Swarm Optimizer", in IEEE International Conference on Systems, Man and Cybernetics, IEEE, 2002, vol. 3.

[132] I. C. Trelea, “The particle swarm optimization algorithm: Convergence analysis and
parameter selection,” Inf. Process. Lett., vol. 85, no. 6, 2003, doi: 10.1016/S0020-
0190(02)00447-7.

[133] K. Jin’no, "A Novel Deterministic Particle Swarm Optimization System", Journal of Signal Processing, vol. 13, no. 6, pp. 507-513, November 2009.

[134] H. M. Emara and H. A. A. Fattah, “Continuous swarm optimization technique with stability analysis,” in Proceedings of the American Control Conference, 2004, vol. 3, doi: 10.1109/ACC.2004.182533.

[135] E. García-Gonzalo and J. L. Fernández-Martínez, “Convergence and stochastic stability analysis of particle swarm optimization variants with generic parameter distributions,” Appl. Math. Comput., vol. 249, 2014, doi: 10.1016/j.amc.2014.10.066.

[136] Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, “Adaptive particle swarm optimization,” IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol. 39, no. 6, pp. 1362–1381, 2009, doi: 10.1109/TSMCB.2009.2015956.

[137] D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE
Trans. Evol. Comput., vol. 1, no. 1, 1997, doi: 10.1109/4235.585893.

[138] C. García-Martínez, F. J. Rodriguez, and M. Lozano, “Arbitrary function optimisation with metaheuristics,” Soft Comput., vol. 16, no. 12, 2012, doi: 10.1007/s00500-012-0881-x.

[139] J. L. Fernández-Martínez, E. García-Gonzalo, and J. P. Fernández-Alvarez, “Theoretical analysis of particle swarm trajectories through a mechanical analogy,” Int. J. Comput. Intell. Res., vol. 4, no. 2, 2008, doi: 10.5019/j.ijcir.2008.129.

[140] B. Al-Kazemi and C. K. Mohan, “Multi-phase generalization of the particle swarm optimization algorithm,” in Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, 2002, vol. 1, doi: 10.1109/CEC.2002.1006283.

[141] T. O. Ting, M. V. C. Rao, C. K. Loo, and S. S. Ngu, “A new class of operators to accelerate particle swarm optimization,” in 2003 Congress on Evolutionary Computation, CEC 2003 - Proceedings, 2003, vol. 4, doi: 10.1109/CEC.2003.1299389.

[142] Z. H. Zhan and J. Zhang, “Orthogonal learning particle swarm optimization for power
electronic circuit optimization with free search range,” In 2011 IEEE Congress of
Evolutionary Computation (CEC), IEEE, 2011, pp. 2563-2570. doi:
10.1109/CEC.2011.5949937.

[143] A. M. Abdelbar, S. Abdelshahid, and D. C. Wunsch, “Fuzzy PSO: A generalization of particle swarm optimization,” in Proceedings of the International Joint Conference on Neural Networks, 2005, vol. 2, doi: 10.1109/IJCNN.2005.1556004.

[144] A. Leontitsis, D. Kontogiorgos, and J. Pagge, “Repel the swarm to the optimum!,” Appl. Math. Comput., vol. 173, no. 1, 2006, doi: 10.1016/j.amc.2005.04.004.

[145] K. Tatsumi, T. Ibuki, and T. Tanino, “A chaotic particle swarm optimization exploiting a virtual quartic objective function based on the personal and global best solutions,” Appl. Math. Comput., vol. 219, no. 17, 2013, doi: 10.1016/j.amc.2013.03.029.

[146] L. dos S. Coelho and C. S. Lee, “Solving economic load dispatch problems in power
systems using chaotic and Gaussian particle swarm optimization approaches,” Int. J. Electr.
Power Energy Syst., vol. 30, no. 5, 2008, doi: 10.1016/j.ijepes.2007.08.001.

[147] K. Tatsumi, T. Ibuki, and T. Tanino, “Particle swarm optimization with stochastic
selection of perturbation-based chaotic updating system,” Appl. Math. Comput., vol. 269,
2015, doi: 10.1016/j.amc.2015.07.098.

[148] J. Kennedy, "Bare bones particle swarms" In Proceedings of the 2003 IEEE Swarm
Intelligence Symposium. SIS'03 (Cat. No. 03EX706), IEEE, 2003, pp. 80-87. doi:
10.1109/SIS.2003.1202251.

[149] T. J. Richer and T. M. Blackwell, "The Lévy particle swarm" In 2006 IEEE
International Conference on Evolutionary Computation, IEEE, 2006, pp. 808-815. doi:
10.1109/cec.2006.1688394.

[150] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, 2004, doi: 10.1109/TEVC.2004.826071.

[151] S. L. Ho, S. Yang, G. Ni, E. W. C. Lo, and H. C. Wong, “A particle swarm optimization-based method for multiobjective design optimizations,” in IEEE Trans. on Magn., 2005, vol. 41, no. 5, doi: 10.1109/TMAG.2005.846033.

[152] G. Ardizzon, G. Cavazzini, and G. Pavesi, “Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms,” Inf. Sci. (Ny)., vol. 299, 2015, doi: 10.1016/j.ins.2014.12.024.

[153] Q. Fan and X. Yan, “Self-adaptive particle swarm optimization with multiple velocity
strategies and its application for p-Xylene oxidation reaction process optimization,” Chemom.
Intell. Lab. Syst., vol. 139, 2014, doi: 10.1016/j.chemolab.2014.09.002.

[154] R. Roy and S. P. Ghoshal, “A novel crazy swarm optimized economic load dispatch
for various types of cost functions,” Int. J. Electr. Power Energy Syst., vol. 30, no. 4, 2008,
doi: 10.1016/j.ijepes.2007.07.007.

[155] T. Peram, K. Veeramachaneni, and C. K. Mohan, “Fitness-distance-ratio based particle swarm optimization,” in Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS'03 (Cat. No. 03EX706), pp. 174-181, IEEE, 2003, doi: 10.1109/SIS.2003.1202264.

[156] C. Li, S. Yang, and T. T. Nguyen, “A self-learning particle swarm optimizer for global
optimization problems,” IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol. 42, no. 3, 2012,
doi: 10.1109/TSMCB.2011.2171946.

[157] Y. Lu, N. Zeng, Y. Liu, and N. Zhang, “A hybrid Wavelet Neural Network and
Switching Particle Swarm Optimization algorithm for face direction recognition,”
Neurocomputing, vol. 155, 2015, doi: 10.1016/j.neucom.2014.12.026.

[158] S. Sengupta, S. Basak, and R. Peters, “Particle Swarm Optimization: A Survey of Historical and Recent Developments with Hybridization Perspectives,” Mach. Learn. Knowl. Extr., vol. 1, no. 1, 2018, doi: 10.3390/make1010010.

[159] A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, “Accelerating particle swarm
optimization using crisscross search,” Inf. Sci. (Ny)., vol. 329, 2016, doi:
10.1016/j.ins.2015.08.018.

[160] R. Boks, H. Wang, and T. Bäck, “A modular hybridization of particle swarm optimization and differential evolution,” 2020, doi: 10.1145/3377929.3398123.

[161] A. Rathore and H. Sharma, “Review on inertia weight strategies for particle swarm
optimization,” in Advances in Intelligent Systems and Computing, 2017, vol. 547, doi:
10.1007/978-981-10-3325-4_9.

[162] C. Yang, W. Gao, N. Liu, and C. Song, “Low-discrepancy sequence initialized particle
swarm optimization algorithm with high-order nonlinear time-varying inertia weight,” Appl.
Soft Comput. J., vol. 29, 2015, doi: 10.1016/j.asoc.2015.01.004.

[163] S. Alam, G. Dobbie, Y. S. Koh, P. Riddle, and S. Ur Rehman, “Research on particle swarm optimization based clustering: A systematic review of literature and techniques,” Swarm and Evolutionary Computation, vol. 17, 2014, doi: 10.1016/j.swevo.2014.02.001.

[164] S. Sakamoto, T. Oda, M. Ikeda, L. Barolli, and F. Xhafa, “Implementation and evaluation of a simulation system based on particle swarm optimisation for node placement problem in wireless mesh networks,” Int. J. Commun. Networks Distrib. Syst., vol. 17, no. 1, 2016, doi: 10.1504/IJCNDS.2016.077935.

[165] C. F. Wang and K. Liu, “A Novel Particle Swarm Optimization Algorithm for Global
Optimization,” Comput. Intell. Neurosci., vol. 2016, 2016, doi: 10.1155/2016/9482073.

[166] M. Mafarja, R. Jarrar, S. Ahmad, and A. A. Abusnaina, “Feature selection using Binary Particle Swarm optimization with time varying inertia weight strategies,” 2018, doi: 10.1145/3231053.3231071.

[167] A. B. Hashemi and M. R. Meybodi, “A note on the learning automata based algorithms for adaptive parameter selection in PSO,” Appl. Soft Comput. J., vol. 11, no. 1, 2011, doi: 10.1016/j.asoc.2009.12.030.

[168] Y. Yan, R. Zhang, J. Wang, and J. Li, “Modified PSO algorithms with ‘Request and
Reset’ for leak source localization using multiple robots,” Neurocomputing, vol. 292, 2018,
doi: 10.1016/j.neucom.2018.02.078.

[169] M. Imran, R. Hashim, and N. E. A. Khalid, “An overview of particle swarm optimization variants,” Procedia Eng., vol. 53, no. 1, pp. 491–496, 2013, doi: 10.1016/j.proeng.2013.02.063.

[170] K. E. Parsopoulos and M. N. Vrahatis, “Initializing the particle swarm optimizer using the nonlinear simplex method,” Adv. Intell. Syst. Fuzzy Syst. Evol. Comput., 2002.

[171] S. Hashem Zadeh, S. Khorashadizadeh, M. Fateh and M. Hadadzarif, "Optimal sliding mode control of a robot manipulator under uncertainty using PSO", Nonlinear Dynamics, vol. 84, no. 4, pp. 2227-2239, 2016.

[172] J. Robinson, S. Sinton, and Y. Rahmat-Samii, “Particle swarm, genetic algorithm, and
their hybrids: Optimization of a profiled corrugated horn antenna,” in IEEE Antennas and
Propagation Society, AP-S International Symposium (Digest), 2002, vol. 1, doi:
10.1109/aps.2002.1016311.

[173] N. Q. Uy, N. X. Hoai, R. I. McKay, and P. M. Tuan, “Initialising PSO with randomised low-discrepancy sequences: The comparative results,” in 2007 IEEE Congress on Evolutionary Computation, pp. 1985-1992, IEEE, 2007, doi: 10.1109/CEC.2007.4424717.

[174] Z. Beheshti and S. M. Shamsuddin, “Non-parametric particle swarm optimization for global optimization,” Appl. Soft Comput. J., vol. 28, 2015, doi: 10.1016/j.asoc.2014.12.015.

[175] D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,”
Proc. 2007 IEEE Swarm Intell. Symp. SIS 2007, no. Sis, pp. 120–127, 2007, doi:
10.1109/SIS.2007.368035.

[176] J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in
Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, 2002, vol. 2,
doi: 10.1109/CEC.2002.1004493.

[177] R. Mendes, “Neighborhood topologies in fully informed and best-of-neighborhood particle swarms,” IEEE Trans. Syst. Man Cybern. Part C Appl. Rev., vol. 36, no. 4, 2006, doi: 10.1109/TSMCC.2006.875410.

[178] J. Kennedy, “Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance,” in Proc. 1999 Congress on Evolutionary Computation (CEC), vol. 3, 1999, doi: 10.1109/CEC.1999.785509.

[179] J. A. Vasconcelos, J. A. Ramirez, R. H. C. Takahashi, and R. R. Saldanha, “Improvements in genetic algorithms,” IEEE Trans. Magn., vol. 37, pp. 3566–3569, Sep. 2001.

[180] M. S. Arumugam, M. V. C. Rao, and A. W. C. Tan, “A novel and effective particle swarm optimization like algorithm with extrapolation technique,” Appl. Soft Comput. J., vol. 9, no. 1, pp. 308–320, 2009, doi: 10.1016/j.asoc.2008.04.016.

[181] S. Kiranyaz, T. Ince, A. Yildirim, and M. Gabbouj, “Fractional particle swarm optimization in multidimensional search space,” IEEE Trans. Syst. Man Cybern. Part B Cybern., vol. 40, no. 2, pp. 298–319, 2010, doi: 10.1109/TSMCB.2009.2015054.

[182] H. Gao, S. Kwong, J. Yang, and J. Cao, “Particle swarm optimization based on
intermediate disturbance strategy algorithm and its application in multi-threshold image
segmentation,” Inf. Sci. (Ny)., vol. 250, pp. 82–112, 2013, doi: 10.1016/j.ins.2013.07.005.

[183] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J.-C. Hernandez, and R. G. Harley, “Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems,” IEEE Trans. Evol. Comput., vol. 12, no. 2, pp. 171–195, 2008.

[184] M. R. Bonyadi and Z. Michalewicz, “Particle swarm optimization for single objective
continuous space problems: A review,” Evol. Comput., vol. 25, no. 1, pp. 1–54, 2017, doi:
10.1162/EVCO_r_00180.

[185] J. C. Bansal, Evolutionary and Swarm Intelligence Algorithms, Studies in Computational Intelligence, vol. 779. Springer, 2019.

[186] D. Gong, L. Lu, and M. Li, “Robot path planning in uncertain environments based on particle swarm optimization,” in Proc. 2009 IEEE Congress on Evolutionary Computation (CEC), pp. 2127–2134, 2009, doi: 10.1109/CEC.2009.4983204.

[187] Q. Bai, “Analysis of particle swarm optimization algorithm,” Comput. Inf. Sci., vol. 3,
no. 1, pp. 180–184, 2010.

[188] V. K. Pathak and A. K. Singh, “Form Error Evaluation of Noncontact Scan Data
Using Constriction Factor Particle Swarm Optimization,” J. Adv. Manuf. Syst., vol. 16, no. 3,
pp. 205–226, 2017, doi: 10.1142/S0219686717500135.

[189] D. Wu, N. Jiang, W. Du, K. Tang, and X. Cao, “Particle Swarm Optimization with
Moving Particles on Scale-Free Networks,” IEEE Trans. Netw. Sci. Eng., vol. 7, no. 1, pp.
497–506, 2020, doi: 10.1109/TNSE.2018.2854884.

[190] V. K. Pathak and A. K. Singh, “A particle swarm optimization approach for minimizing GD&T error in additive manufactured parts: PSO based GD&T minimization,” Int. J. Manuf. Mater. Mech. Eng., vol. 7, no. 3, pp. 69–80, 2017, doi: 10.4018/IJMMME.2017070104.

[191] J. Baizhuang, “Improved PSO algorithm based on cosine functions and its
simulation,” Journal of Computer Applications, 2013.

[192] S. Fan and Y. Chiu, “A decreasing inertia weight particle swarm optimizer,” Engineering Optimization, vol. 39, no. 2, pp. 203–228, 2007.

[193] R. A. Krohling, “Gaussian swarm: A novel particle swarm optimization algorithm,” in Proc. IEEE Conference on Cybernetics and Intelligent Systems, vol. 1, pp. 372–376, 2004, doi: 10.1109/iccis.2004.1460443.

[194] X. Ye, H. Chen, H. Liang, et al., “Multi-Objective Optimization Design for Electromagnetic Devices With Permanent Magnet Based on Approximation Model and Distributed Cooperative Particle Swarm Optimization Algorithm,” IEEE Trans. Magn., vol. 54, no. 3, pp. 1–5, 2018.

[195] J. Dong, S. Yang, G. Ni, and P. Ni, “An Improved Particle Swarm Optimization Algorithm for Global Optimizations of Electromagnetic Devices,” Int. J. Appl. Electromagn. Mech., vol. 25, no. 1–4, pp. 723–728, 2007.

[196] P. G. Alotto, U. Baumgartner, F. Freschi, M. Jaindl, A. Kostinger, C. Magele, et al., “SMES Optimization Benchmark: TEAM Workshop Problem 22,” Graz, Austria, 2008. [Online]. Available: http://compumag.org/jsite/images/stories/TEAM/problem22.pdf

[197] X. Tao, W. Guo, Q. Li, C. Ren, and R. Liu, “Multiple scale self-adaptive cooperation
mutation strategy-based particle swarm optimization,” Appl. Soft Comput. J., vol. 89, no. 1, p.
106124, 2020, doi: 10.1016/j.asoc.2020.106124.

[198] C. Du, Z. Yin, Y. Zhang, J. Liu, X. Sun, and Y. Zhong, “Research on Active
Disturbance Rejection Control With Parameter Autotune Mechanism for Induction Motors
Based on Adaptive Particle Swarm Optimization Algorithm With Dynamic Inertia Weight,”
IEEE Trans. Power Electron., vol. 34, no. 3, pp. 2841–2855, 2019, doi:
10.1109/TPEL.2018.2841869.

[199] V. K. Pathak, S. Kumar, C. Nayak, and N. Gowripathi Rao, “Evaluating Geometric Characteristics of Planar Surfaces using Improved Particle Swarm Optimization,” Meas. Sci. Rev., vol. 17, no. 4, pp. 187–196, 2017, doi: 10.1515/msr-2017-0022.

[200] X. Wang, G. Wang, and Y. Wu, “An Adaptive Particle Swarm Optimization for
Underwater Target Tracking in Forward Looking Sonar Image Sequences,” IEEE Access, vol.
6, pp. 46833–46843, 2018, doi: 10.1109/ACCESS.2018.2866381.

[201] B. F. Gumaida and J. Luo, “A hybrid particle swarm optimization with a variable
neighborhood search for the localization enhancement in wireless sensor networks,” Appl.
Intell., vol. 49, no. 10, pp. 3539–3557, 2019, doi: 10.1007/s10489-019-01467-8.

[202] V. K. Pathak and A. K. Singh, “Optimization of morphological process parameters in contactless laser scanning system using modified particle swarm algorithm,” Meas. J. Int. Meas. Confed., vol. 109, pp. 27–35, 2017, doi: 10.1016/j.measurement.2017.05.049.

[203] M. He, M. Liu, R. Wang, X. Jiang, B. Liu, and H. Zhou, “Particle swarm optimization
with damping factor and cooperative mechanism,” Appl. Soft Comput. J., vol. 76, pp. 45–52,
2019, doi: 10.1016/j.asoc.2018.11.050.

[204] J. J. Jamian, M. N. Abdullah, H. Mokhlis, M. W. Mustafa, and A. H. A. Bakar, “Global particle swarm optimization for high dimension numerical functions analysis,” J. Appl. Math., vol. 2014, 2014, doi: 10.1155/2014/329193.

[205] B. Jana, S. Mitra, and S. Acharyya, “Repository and Mutation based Particle Swarm
Optimization (RMPSO): A new PSO variant applied to reconstruction of Gene Regulatory
Network,” Appl. Soft Comput. J., vol. 74, pp. 330–355, 2019, doi:
10.1016/j.asoc.2018.09.027.

[206] A. R. Jordehi, “Enhanced leader PSO (ELPSO): A new PSO variant for solving global
optimisation problems,” Appl. Soft Comput. J., vol. 26, pp. 401–417, 2015, doi:
10.1016/j.asoc.2014.10.026.

[207] A. A. Karim, N. A. M. Isa, and W. H. Lim, “Modified particle swarm optimization with effective guides,” IEEE Access, vol. 8, pp. 188699–188725, 2020, doi: 10.1109/ACCESS.2020.3030950.

[208] D. Wang, D. Tan, and L. Liu, “Particle swarm optimization algorithm: an overview,”
Soft Comput., 2018, doi: 10.1007/s00500-016-2474-6.

[209] H. Y. Hwang and J. S. Chen, “Optimized fuel economy control of power-split hybrid
electric vehicle with particle swarm optimization,” Energies, 2020, doi: 10.3390/en13092278.

[210] H. Kariem and S. M. Shaaban, “Energy optimization of an electric car using losses minimization and intelligent predictive torque control,” J. Algorithms Comput. Technol., 2020, doi: 10.1177/1748302620966698.

[211] M. G. Lopez, P. Ponce, L. A. Soriano, A. Molina, and J. J. R. Rivas, “A Novel Fuzzy-PSO Controller for Increasing the Lifetime in Power Electronics Stage for Brushless DC Drives,” IEEE Access, 2019, doi: 10.1109/ACCESS.2019.2909845.

[212] S. Djemame, M. Batouche, H. Oulhadj, and P. Siarry, “Solving reverse emergence with
quantum PSO application to image processing,” Soft Comput., 2019, doi: 10.1007/s00500-
018-3331-6.

[213] A. Adriansyah, H. Suwoyo, Y. Tian, and C. Deng, “Improving wall-following robot performance using PID-PSO controller,” J. Teknol., 2019, doi: 10.11113/jt.v81.13098.

[214] S. Patel and R. A. Thakker, “Automatic Circuit Design and Optimization using
Modified PSO Algorithm,” J. Eng. Sci. Technol. Rev., 2016, doi: 10.25103/JESTR.094.27.

[215] S. U. Khan, S. Yang, L. Wang, and L. Liu, “A Modified Particle Swarm Optimization
Algorithm for Global Optimizations of Inverse Problems,” IEEE Trans. Magn., 2016, doi:
10.1109/TMAG.2015.2487678.

[216] R. A. Khan, S. Yang, S. Fahad, S. U. Khan, and Kalimullah, “A Modified Particle Swarm Optimization with a Smart Particle for Inverse Problems in Electromagnetic Devices,” IEEE Access, 2021, doi: 10.1109/ACCESS.2021.3095403.

[217] S. Fahad, S. Yang, R. A. Khan, S. Khan, and S. A. Khan, “A multimodal smart quantum particle swarm optimization for electromagnetic design optimization problems,” Energies, 2021, doi: 10.3390/en14154613.

[218] R. A. Khan, S. Yang, S. Khan, S. Fahad, and Kalimullah, “A Multimodal Improved Particle Swarm Optimization for High Dimensional Problems in Electromagnetic Devices,” Energies, 2021, doi: 10.3390/en14248575.
