
Circuits, Systems, and Signal Processing

https://doi.org/10.1007/s00034-023-02342-1

Multi-objective Hybrid Particle Swarm Optimization and its Application to Analog and RF Circuit Optimization

Deepak Joshi1 · Satyabrata Dash2 · Sushanth Reddy3 · Rahul Manigilla4 · Gaurav Trivedi5

Received: 4 October 2022 / Revised: 26 February 2023 / Accepted: 27 February 2023


© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023

Abstract
The presence of RF components in mixed-signal circuits makes it a challenging task to resolve tradeoffs among performance specifications. In order to ease the circuit design process, these tradeoffs are analyzed using multi-objective optimization methodologies. This paper presents a hybrid multi-objective optimization framework (MHPSO), a combination of particle swarm optimization and simulated annealing. The framework emphasizes preserving nondominated solutions in an external archive. The multi-dimensional space excluding the archive is divided into several sub-spaces according to a velocity-temperature mapping scheme. Further, the solutions in each sub-space are optimized using simulated annealing to generate a Pareto front. The framework is extended by incorporating a crowding distance comparison operator (MHPSO-CD) to maintain nondominated solutions in the archive. The effectiveness of the proposed methodologies is demonstrated for performance space exploration of three electronic circuits, i.e., a two-stage operational amplifier, a folded cascode operational amplifier, and a low noise amplifier with inductive source degeneration. Additionally, the performance of the proposed algorithms (MHPSO, MHPSO-CD) is evaluated on various test functions, and the results are compared with standard multi-objective evolutionary algorithms.

B Deepak Joshi
d.joshi@eced.svnit.ac.in
Satyabrata Dash
satyabrataiitg@gmail.com
Sushanth Reddy
syellare@andrew.cmu.edu
Rahul Manigilla
rahulreddym1004@gmail.com
Gaurav Trivedi
trivedi@iitg.ernet.in

1 Sardar Vallabhbhai National Institute of Technology, Surat, India


2 TSMC, Hsinchu, Taiwan
3 Carnegie Mellon University, Pittsburgh, PA, USA
4 Microsoft, Hyderabad, India
5 Indian Institute of Technology Guwahati, Guwahati, India


Keywords Multi-objective optimization · Particle swarm optimization · Simulated annealing · Pareto front · Analog circuit optimization · Performance space exploration

1 Introduction

The development of systems-on-chip, consisting of analog and digital circuits, has become a crucial area of research due to the ever-increasing demand for miniaturized and compact devices. The induction of Internet-of-Things (IoT) devices has made analog circuit design more challenging due to low power requirements. In the modern era, the design specifications of analog circuits are becoming more demanding with the advancements in IC fabrication techniques. Analog circuits consume a significant part of the resources (knowledge-intensive design effort) and time during a system design process. Therefore, the automation of analog circuit design is gaining importance to achieve time-to-market goals. The design methodology for analog and RF circuit sizing has evolved from the "pen and paper" approach to simulation-based optimization schemes. In general, analog circuit designers derive design equations for circuit sizing and solve them for a given set of specifications. Recently, various numerical simulation methods [26] have been presented to review circuit performance. Optimization schemes based on the memetic algorithm [42] and gradient descent-based algorithms [6, 46] have been effectively applied in the area of controller design. The significant challenges in developing circuit optimization algorithms are listed below and discussed later in this section.
– Trade-offs / conflicting design objectives: In the analog circuit optimization process, the design objectives are interdependent and hence cannot be maximized or minimized independently. In such cases, optimization results in a group of nondominated solutions (Pareto front). Furthermore, generating a diverse and dense Pareto front is a challenging task.
– Global solution: When the circuit design problem is modeled as a mathematical
optimization problem, the methods must be chosen based on how well they avoid
local traps and produce a global solution.
– Convergence: The time to market of any integrated circuit or system is dominated
by the analog IC design process. It is crucial to select an algorithm that can reduce
the convergence time.
With the advent of new methodologies, the development of efficient circuit simulators has become essential for the analog design process flow. The performance evaluation of analog circuits, which undergo a complicated design process, empowers circuit simulators to iteratively modify design variables through the use of optimization techniques [37, 50]. However, due to technology scaling, it is necessary to analyze tradeoffs among various design specifications simultaneously to meet a certain process yield. Hence, there is a growing need for efficient multi-objective optimization methodologies as performance evaluation tools in automated design environments.
Performance evaluation of a circuit is carried out either by using a topology-specific fixed design plan [34, 35] or by performing random numerical simulations during optimization [44]. A fixed design plan emphasizes solving a set of equations for a given set of design variables, covering both circuit performance and circuit characteristics, i.e., cost function formulation. In order to facilitate the optimization process, sometimes only first-order expressions are retained and the rest are discarded. Although the computational effort is low, omitting higher-order terms during cost function formulation leads to issues related to reliability and robustness. On the other hand, taking random numerical steps in a nature-inspired pattern [1, 3, 34, 35, 47] with associated features (advanced device models) introduces flexibility in exploring the design space, preserving diversity and achieving the optimum solution with minimum computational effort.
Further, with more than one cost function to be analyzed simultaneously, the design space becomes more diversified, with multiple design variables bounded by constraints, and the complexity increases with the number of design variables [25]. As two cost functions compete to achieve optimality, transforming the problem into a single-objective optimization problem is complicated, and finding an optimal solution is rather difficult [33]. To address this issue, several methods have been suggested for modeling the design space [3, 40]. These models aim to capture the prominent functionalities of circuits to speed up the optimization process. Both stochastic and deterministic approaches have been employed in the literature to generate a class of macromodels [40, 52]. However, with the reduction in feature size, the effect of design parameters has become more pronounced, and the design of an efficient macromodel of a circuit is quite challenging and complex. Therefore, it is necessary to have an efficient system-level design methodology with adequate functionalities (macromodel) to capture the performance of a circuit.
In [28], we presented a single-objective optimization algorithm based on a hybrid of particle swarm optimization (PSO) and simulated annealing (SA). Such a hybrid exploits the fast convergence of PSO and the ability of SA to avoid getting stuck in local traps. Thus, a simple single-objective optimization algorithm was presented to optimize analog circuits. In the present work, it is extended to multi-objective analog and RF circuit optimization. In this paper, we formulate the tradeoffs among performance specifications as a constraint-driven multi-objective optimization problem. Specifically, we emphasize nature-inspired algorithms, i.e., particle swarm optimization (PSO) and simulated annealing (SA), for the construction of a framework that deals more explicitly with model uncertainties and deviations in responses and specifications, which form the cost functions to be minimized or maximized. The framework consists of a hybrid metaheuristic, a combination of PSO and SA, capable of generating feasible performance values for a given circuit. The proposed design methodology has the following key features.

– A novel search strategy based on a hybrid combination of particle swarm optimization (PSO) and simulated annealing (SA) for improved optimization.
– A velocity-temperature mapping scheme as an initialization scheme for SA.
– Incorporation of a crowding distance scheme as a variant of the initial search strategy for better selection of nondominated solutions.

The framework is developed to explore and construct the boundaries of the circuit's performance space and to retrieve the required set of design variables. Three different amplifier circuits with compelling design constraints have been designed for an optimal set of design variables, and Pareto fronts are generated between competing design specifications (gain versus power consumption, gain versus noise figure, etc.) of each circuit. Further, the viability of nature-inspired algorithms within the proposed framework is demonstrated by solving various multi-objective test functions and comparing the performance with other standard optimization algorithms.
The outline of the rest of the paper is as follows. A brief review of related works is presented in Sect. 2. A brief introduction to PSO and SA is presented in Sect. 3. The proposed framework is presented in Sect. 4. The experimental setup and results for analog circuits and multi-objective benchmark functions are discussed in Sect. 5. The paper is concluded in Sect. 6.

2 Related Works

Many research efforts on stochastic search algorithms, especially nature-inspired algorithms, have been made to capture the design space boundaries of analog and RF electronic circuits. The genetic algorithm (GA) is one of the popular optimization routines and has been employed for circuit sizing in both industry and academia [34, 35]. For different circuits, other nature-inspired algorithms, e.g., particle swarm optimization (PSO) [20], simulated annealing (SA) [24, 53], ant colony optimization (ACO) [29], etc., have also been applied as design space boundary exploration techniques. For example, these techniques have been applied for the optimization of different digital filters [1, 23, 30, 54]. Several variants of GAs have been employed to design CMOS operational amplifier circuits with constrained topologies. Genetic programming is used as an analog synthesis tool in several filter design and circuit synthesis problems [34].
However, all these techniques have centered on optimizing a single cost function instead of an optimally located region spanned by many cost functions (multi-objective optimization). As circuit modeling is often affected by uncertainties and errors at many levels, it is necessary to analyze tradeoffs among performance specifications to avoid measurement errors and inconsistent solutions during design. To overcome these problems, several multi-objective techniques have been proposed, with an emphasis on nature-inspired algorithms because of their inherent ability and efficiency in handling uncertainty during the circuit design flow. NSGA-II [16] has been employed as a leading optimizer in various analog and RF IC synthesis tools over the years for multi-objective circuit optimization [7, 37, 38]. Several learning strategies [3, 14, 18, 19] have also been introduced as modeling techniques based on the distribution of design parameters, allowing a fast exploration of the design space for a set of design parameters. However, with the induction of GA, the convergence rate and accuracy of the optimizer may be affected by an increasing number of design variables. A hybrid approach combining SA and PSO has also been suggested to cope with circuit-level design specifications and to explore tradeoffs of different analog circuits [28, 55]. Recently, analog circuit optimization has been attempted by applying the problem to the MOEA/D-DE framework [27], where an optimal circuit design has been attempted without employing any design constraints. Besides these academic endeavors, several commercial design tools have also been introduced for circuit sizing in the past few years, such as NeoCircuit [8], Genius's ADA [4], Barcelona Design [17], etc.

3 Background

Nature-inspired algorithms have become popular among analog circuit designers due to the certainty of finding a solution. The concept of survival of the fittest enables a detailed selection process for complete exploration of the performance space while preserving the diversity of solutions. The cost functions can be optimized using nature-inspired algorithms under a given set of conditions or constraints. In this paper, the optimization framework is proposed as a combination of particle swarm optimization (PSO) and simulated annealing (SA). These two algorithms are briefly discussed as follows.

3.1 Particle Swarm Optimization (PSO)

Particle swarm optimization imitates the search pattern of a swarm of bees looking for the place with the highest nectar quantity. For an optimization problem with n variables, a population of size N is generated randomly within the given variable ranges. In the case of circuit sizing, these variables can be the length (L), width (W), and transconductance (g_m) of a device (transistor). Fitness values are calculated for each particle and are tagged as pbest and gbest according to the personal and social performance of a particle. The position of a particle, updated iteratively, depends on the velocity of the particle. The particle velocity consists of the present fitness value, random coefficients, and the previous velocity. For a particle in the i-th iteration with position x_i and velocity v_i, the next position can be expressed as [31],

x_{i+1} = x_i + v_{i+1},  (1)

and the velocity can be expressed as,

v_{i+1} = w v_i + c_1 r_1 (gbest − x_i) + c_2 r_2 (pbest − x_i),  (2)

where, c1 and c2 are acceleration coefficients; r1 and r2 are random numbers generated
uniformly between 0 and 1; and w denotes inertia.
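As a minimal illustration of the update rules in (1) and (2), the following Python sketch performs one velocity and position update for a small swarm. The values of w, c1, and c2, the variable ranges, and the placeholder pbest/gbest are assumptions for illustration only, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vars, n_particles = 3, 20          # e.g., W, L, g_m of a transistor
w, c1, c2 = 0.7, 1.5, 1.5            # assumed inertia and acceleration coefficients

x = rng.uniform(0.0, 1.0, (n_particles, n_vars))   # positions
v = np.zeros_like(x)                               # velocities
pbest = x.copy()                                   # personal best positions (placeholder)
gbest = x[0].copy()                                # global best position (placeholder)

def pso_step(x, v, pbest, gbest):
    """One iteration of (1)-(2): v <- w*v + c1*r1*(gbest - x) + c2*r2*(pbest - x)."""
    r1 = rng.uniform(size=x.shape)
    r2 = rng.uniform(size=x.shape)
    v_new = w * v + c1 * r1 * (gbest - x) + c2 * r2 * (pbest - x)
    x_new = x + v_new                              # position update of (1)
    return x_new, v_new

x, v = pso_step(x, v, pbest, gbest)
```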

3.2 Simulated Annealing (SA)

Simulated annealing is based on the crystallization (annealing) process in metals. Local minima of energy in a system correspond to different crystalline states. Similarly, randomly generated particles in a multidimensional design space, bounded by a given set of circuit design variables, are allowed to cool down and move towards a better solution or another local minimum that exhibits a better cost function. A probability function controls the acceptance of a solution in the next iteration, and this acceptance probability decreases at every iteration. Accepting worse solutions with a small but nonzero probability enables SA to avoid getting trapped in local minima. For the i-th iteration, the cost function value and the corresponding solution can be expressed as f(x_i) and x_i, respectively. The solution in the next iteration (x_{i+1}) can be expressed as,

x_{i+1} = x_i + Δx_i,  (3)

where Δx_i is the perturbation applied to the current solution. If the resulting change in the cost function, Δf = f(x_{i+1}) − f(x_i), is negative, the probability of acceptance is unity (p = 1); for positive Δf, the acceptance probability p is less than unity and can be expressed as,

p = e^{−αΔf},  (4)

where α is the annealing parameter, which increases iteratively. In general, SA is used as a complement to other approaches for local refinement of the solutions [24].
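The following sketch illustrates the move and acceptance rule in (3) and (4) on a toy one-dimensional cost function. The cost function, step size, and annealing-parameter schedule are assumed for illustration and are not the settings used in the paper.

```python
import math
import random

def f(x):
    return (x - 2.0) ** 2                            # toy one-dimensional cost function

def sa_minimize(x0, alpha0=0.1, growth=1.05, n_iter=200, step=0.5):
    x, alpha = x0, alpha0
    for _ in range(n_iter):
        x_new = x + random.uniform(-step, step)      # x_{i+1} = x_i + dx_i, as in (3)
        delta_f = f(x_new) - f(x)
        if delta_f < 0:
            x = x_new                                # improvement accepted with p = 1
        elif random.random() < math.exp(-alpha * delta_f):
            x = x_new                                # worse move accepted with p = e^{-alpha*df}, as in (4)
        alpha *= growth                              # annealing parameter increases iteratively
    return x

print(sa_minimize(x0=10.0))
```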

4 Hybrid Particle Swarm Optimization (MHPSO)

Although PSO is one of the most popular algorithms for finding optimal regions of complex search spaces through the interaction of particles in a population, its search ability and convergence rate are not efficient [10]. Therefore, we propose to improve the performance of PSO by combining its search strategy with SA to construct a hybrid framework that enables automation in the circuit design flow. Specifically, we present a velocity-temperature mapping scheme to initiate the SA search on the solutions evaluated using PSO.

Let us consider the i-th generation of the MHPSO algorithm. Suppose the position of a particle at this generation is x^i_pos and the particle is allowed to move in a search space of dimension N. The first step is to initialize the positions X_pos and velocities V_x of the particles. Both the local and global best positions of the particles are initialized to their initial positions to start the optimization process. As each particle continues to move inside the search space, its velocity and position are updated. Consequently, the local best positions are updated and incorporated into the framework for evaluating the next position and velocity. Later, the particles having the best global positions are chosen as nondominated particles and are stored in an external repository A. Once PSO is completed, the annealing process is applied to the dominated particles after velocity-temperature mapping. The space of dominated particles (excluding the archive A) is divided into sub-spaces having different temperature coefficients (T1, T2, T3, etc.). Further, the annealing process is carried out in each subspace to improve the diversity of solutions.

Fig. 1 Proposed Framework

The entire process flow of the framework is shown in Fig. 1. Essential implementation steps of the framework are discussed as follows.

4.1 Initialization and Classification of Population

A population of N particles is randomly generated inside the multidimensional design space. The position of each particle, x^i_pos, is assigned random values, and the velocity of each particle, V^i_x, is initialized to zero. An external archive A is maintained to store nondominated particles and the corresponding solutions. The selection procedure for identifying the nondominated fronts is performed using the normal dominance principle [9]. The operation is carried out until the size of A ≥ N. If |A| > N, no further swarm operation is performed and the annealing process is initiated.
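A plain implementation of the dominance check used to fill the archive could look as follows. This is a sketch assuming all objectives are to be minimized; it is not the authors' code.

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def nondominated(fitness):
    """Return indices of the nondominated solutions in a list of objective vectors."""
    keep = []
    for i, fi in enumerate(fitness):
        if not any(dominates(fj, fi) for j, fj in enumerate(fitness) if j != i):
            keep.append(i)
    return keep

fitness = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(nondominated(fitness))   # [0, 1, 3]; (3.0, 4.0) is dominated by (2.0, 3.0)
```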

4.2 Particle Movement and Finding Global Best

Once the positions of the particles are initialized, each particle follows a trajectory inside the design space. The path of this trajectory is governed by a local search principle as described in Sect. 3.1. In this method, the stochastically weighted difference between the neighboring particle's local best position and the individual's current position, x^i_lbest − x^i_pos, and the difference between the global best position and the individual's current position, x_gbest − x^i_pos, are added to the individual's current velocity v^i_x to adjust the next time step. The global best position x_gbest is evaluated [10] at the end of each movement of all particles in the design space, and the formula for finding x_gbest can be expressed by rewriting (1) as follows:

x_gbest ← [c_1 r_1 (x^i_lbest − x^i_pos) + c_2 r_2 (x_gbest − x^i_pos)] / (c_1 r_1 + c_2 r_2),  (5)

where the value of x_gbest is regularly updated as the system moves towards an optimum solution. The procedure is presented in Algorithm 1. As there is a random weighting of the control parameters in (5), the particles follow Brownian motion, preventing an explosion of the design space and supporting convergence to local optima in most cases.

4.3 Ranking and Velocity-Temperature (V-T) Mapping Scheme

MHPSO uses an external archive A to store the nondominated solutions obtained after the swarm search operation. After the nondominated solutions (δ) are stored in A, the rest of the solutions (N − δ) are ranked according to their distance, evaluated by either the Euclidean distance [13] or the crowding distance [16], from the nearest reference points in A, instead of being discarded. The results of both methods are presented in Sect. 5. In order to improve diversity, the nondominated solutions in A are chosen as reference points for the corresponding nearest dominated solutions. Once the ranking of the dominated solutions is accomplished, each corresponding particle is assigned a thermodynamic temperature depending upon the kinetic motion of that particle in the three-dimensional search space. We analyze the relationship between kinetic energy and velocity, i.e., E_k = (1/2) m v², by analyzing the behavior of each particle whose velocity is modified in each step, and also relate the translational kinetic energy (E_k) to the kinetic temperature (T_k) by,

E_k = (3/2) k_B T_k,  (6)

Algorithm 1 Finding Nondominated Solutions using Swarm Search

Input: P_pop, MAX_GEN
Output: P_opt
1: Generate random population P_pop
2: for i ← 1 to P_size do
3:    Initialize V_x[i]
4:    Initialize X_pos[i]
5: end for
6: Evaluate fitness P_x for each particle in P_pop
7: Store nondominated solutions in archive A_fitness
8: Initialize P_gbest and X_gbest
9: for i ← 1 to P_size do
10:    X_lbest[i] ← X_pos[i]
11: end for
12: for gen ← 1 to MAX_GEN do
13:    for i ← 1 to P_size do
14:        V_x[i] ← w × V_x[i] + C_1 × (X_lbest[i] − X_pos[i]) + C_2 × (X_gbest − X_pos[i])
15:        X_pos[i] ← X_pos[i] + V_x[i]
16:        Check for out-of-bound solutions
17:        Update A_fitness
18:        Check and update local best particle X_lbest[i]
19:    end for
20:    Check and update global best particle X_gbest and solution P_gbest
21: end for    ▷ End of swarm search

where k_B = 1.38065 × 10^−23 J/K denotes the Boltzmann constant. At any instant, the proportion of particles moving at a calculated velocity within the search space can be evaluated by the Maxwell–Boltzmann distribution. The Maxwell–Boltzmann distribution of particle velocities (for the Binh2 function) with respect to the corresponding temperature values for different objective functions is plotted in Fig. 2(a). It can be observed from Fig. 2(b) that the particles having higher velocities (the normalized values ranging from 2 to 5 are evaluated from Fig. 2(a)) correspond to higher thermodynamic temperatures. Once the particles corresponding to the N − δ solutions are scaled according to temperature, the solutions are identified according to their assigned ranks. The entire search space S^{N−δ} is divided into subspaces s_t, where t = 1, 2, 3, ..., M. The cost functions having equal ranks are placed in the same subspace. However, analyzing diverged solutions that are far from the Pareto front is not critical; only the dominant solutions are considered in each step of the annealing process. In the proposed method, we have analyzed up to three subspaces by keeping the division of ranks to three parts (i.e., t = 1, 2, and 3), and solutions with higher rank values have been discarded.
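The V-T mapping can be sketched as follows: each dominated particle's speed is converted to a kinetic temperature through E_k = (1/2) m v² = (3/2) k_B T_k, and the particles are then grouped into subspaces by temperature rank. The particle "mass" m and the equal three-way split are illustrative assumptions.

```python
import numpy as np

K_B = 1.38065e-23          # Boltzmann constant [J/K]
M = 1.0                    # assumed particle mass

def kinetic_temperature(velocities):
    """Map particle velocity vectors to kinetic temperatures T_k via E_k = (3/2) k_B T_k."""
    speed_sq = np.sum(np.asarray(velocities) ** 2, axis=1)
    e_k = 0.5 * M * speed_sq
    return 2.0 * e_k / (3.0 * K_B)

def split_into_subspaces(temperatures, n_subspaces=3):
    """Rank particles by temperature and split them into (roughly) equal subspaces."""
    order = np.argsort(temperatures)[::-1]           # hottest (fastest) particles first
    return np.array_split(order, n_subspaces)        # s_1, s_2, s_3

rng = np.random.default_rng(1)
v = rng.normal(size=(12, 3))                         # velocities of the dominated particles
T = kinetic_temperature(v)
s1, s2, s3 = split_into_subspaces(T)
```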

4.4 Annealing Process

Further, the solutions in s_t are subjected to a multi-objective annealing process to search for any remaining nondominated solutions and to obtain a diverse Pareto front. The annealing process is described using pseudocode in Algorithm 2. It can be observed from Algorithm 2 that the annealing process operates on a probability evaluation, prob = 1/(1 + e^{−X^i_pos/temp}), driven by the cooling down of the temperature in each step. Although the annealing process becomes slower as it progresses, it firmly avoids the selection of any localized solutions and accounts for global optimum solutions. Since the annealing process is applied to the dominated solutions after the swarm search, the proposed MHPSO procedure maintains diversity in the space S^N at each generation. This enables the framework to solve problems having cost functions with different scales.

Fig. 2 a Maxwell distribution of particles at any instant, relating temperature with velocity, and b representation of particles in the objective space of the Binh2 test function for different temperature regions, i.e., T1 < T2 < T3

Algorithm 2 Annealing Process

Input: P_pop, MAX_GEN
Output: S_sa
1: Map particles to different temperature regions (T1, T2, T3, etc.) in the solution space S
2: for i ← 1 to T_num do
3:    Initialize current solution P_x
4:    step ← 0
5:    while true do
6:        temp ← T_i × e^(−step × decayRate)
7:        if temp < minTemp then
8:            break
9:        end if
10:       for j ← 1 to n do
11:           prob ← 1/(1 + e^(−X_pos[j]/temp))
12:           if rand() < prob then
13:               X_cur[j] ← X_pos[j] + ΔX_pos[j]    ▷ perturbed candidate, as in (3)
14:               if X_cur[j] < X_pos[j] then
15:                   X_pos[j] ← X_cur[j]
16:               end if
17:           end if
18:       end for
19:       Check and update current solution P_x
20:       step ← step + 1
21:    end while
22: end for


Algorithm 3 Niching Procedure

Input: ρ_j, s_j, j ∈ S
Output: A_{N−δ}
1: for i ← 1 to β do
2:    for each subspace s_j do
3:        Find t_max^j for s_j
4:        Find x_count^j in s_j
5:    end for
6:    for each subspace s_j do
7:        while x_count^j ≠ φ do
8:            choose x_s^j
9:            if ρ_j ≠ φ then
10:               A_{N−δ} ← A_{N−δ} ∪ S_sa
11:               ρ_j ← ρ_j + 1
12:               x_count^j ← x_count^j − 1
13:           end if
14:       end while
15:    end for
16: end for

4.5 Niche Preservation

While constructing the subspaces and performing the annealing process in each subspace, it is worth noting that multiple particles (x_count) may have the same temperature value associated with them, or no particles may share the same temperature (t). As the particles with high temperatures (t_max) require a prolonged annealing process, it is necessary to maintain a niche count ρ_j for the j-th subspace. The niche-preservation operation starts by identifying the particle with the highest temperature rating (minimum ρ_j) in any subspace s_j. The niching process is presented in Algorithm 3.

In the case of multiple particles having the same high-temperature rating, the particle (x_s^j) having the shortest Euclidean distance from the nearest nondominated particle is preferred. The annealing process in subspace s_j is carried out until the temperature associated with the chosen particle has cooled down to the minimum temperature. Then, the niche count ρ_j is incremented by one. The subspace with the highest ρ count is subjected to the annealing process to update the archive A. Only a few of the particles are subjected to the annealing process in a subspace having a low ρ count, i.e., the particles having very low temperature ratings are discarded. This procedure is repeated β times after all niche counts are updated to fill the vacant spots in the archive A_{N−δ}.
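A sketch of the niche-count bookkeeping described above is given below. The tuple representation of the particles and the selection of a single hottest particle per subspace are simplifying assumptions made for illustration.

```python
from collections import defaultdict

def niche_select(particles):
    """particles: list of (subspace_id, temperature, particle_id) tuples.
    Returns the hottest particle per subspace and the niche counts rho_j."""
    by_subspace = defaultdict(list)
    for sub, temp, pid in particles:
        by_subspace[sub].append((temp, pid))

    rho = defaultdict(int)
    chosen = {}
    for sub, members in by_subspace.items():
        temp_max, pid = max(members)   # particle with the highest temperature t_max in s_j
        chosen[sub] = pid              # this particle is annealed until the minimum temperature
        rho[sub] += 1                  # then the niche count rho_j of the subspace is incremented
    return chosen, dict(rho)

particles = [(1, 350.0, "a"), (1, 420.0, "b"), (2, 120.0, "c"), (3, 75.0, "d")]
print(niche_select(particles))
```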

4.6 Computational Complexity

The swarm search in Algorithm 1, for a population of size P_size with M-dimensional cost functions and a maximum of G generations, requires O(P_size M G) computations [32]. Ranking the dominated solutions before the mapping in statement 1 of Algorithm 2 requires a Euclidean distance or crowding distance evaluation on a population of size P_{N−δ}; employing the crowding distance and the Euclidean distance in the ranking process requires [16] a total of O(M P_{N−δ} log(P_{N−δ})) and O(M(N − δ)) computations, respectively. Mapping the particles to different subspaces according to the temperature values (statement 1 of Algorithm 2) requires O(M(N − δ)) computations. The annealing process in statements 2–22 of Algorithm 2 requires O(t M P_{s_t} G) computations for t subspaces (s_t) with a maximum of G generations. Assuming θ_j = |x_count^j| for subspace s_j, statement 7 of Algorithm 3 requires O(θ_j) checking operations. Considering a maximum of s subspaces and β generations, the worst-case niching procedure requires O(sβθ²) computations for the niche count evaluation (assuming a maximum niching count of θ for each subspace).

5 Experiments

5.1 Setup

1. Peer Algorithms: The performance of the proposed MHPSO/MHPSO-CD is validated by choosing eight state-of-the-art multi-objective optimization algorithms (e.g., NSGA-II [16], NSGA-II-DE [56], MOEA/D-DE [39], PESA-II [12], SPEA-II [58], DMOPSO [41], OMOPSO [48], and SMPSO [43]) as peer algorithms. The authors' implementations of these algorithms are used to obtain the solutions of the test functions. Algorithms such as MOEA/D-DE, SPEA-II, and NSGA-II-DE were not originally developed to solve optimization problems subject to a set of constraints. In this paper, the constraint handling mechanism proposed in NSGA-II is used for these algorithms (MOEA/D-DE, SPEA-II, and NSGA-II-DE) to solve constrained optimization problems. A population size of 100 evolved over 200 generations is considered for NSGA-II, NSGA-II-DE, MOEA/D-DE, PESA-II, and SPEA-II, and 2000 function evaluations (generations) with a population size of 100 are considered for DMOPSO, OMOPSO, SMPSO, MHPSO, and MHPSO-CD. Further, the Pareto fronts generated by the proposed MHPSO and MHPSO-CD are compared with these peer algorithms.
2. Test Examples: In order to evaluate the proposed MHPSO/MHPSO-CD for the application of analog circuit optimization, three analog circuits (i.e., a two-stage operational amplifier, a folded cascode operational amplifier, and a low noise amplifier) are designed using 180 nm CMOS process technology parameters. Further, various standard test functions are considered in this paper for the evaluation of the proposed approaches. The test functions ZDTi (i = 1, 2, 3, 4, 6) [57], SCH [49], FON [22], and KUR [36] are unconstrained optimization problems, whereas BINH2 [59], CEX [16], OZY [45], and TNK [51] are optimization problems subject to constraints.
3. Quality Indicators: Multi-objective optimization algorithms are quantitatively analyzed by considering two factors. The first concerns the closeness of the obtained solutions to the Pareto front, and the second concerns the diversity of the obtained solutions with respect to the Pareto front. Therefore, in order to compare multi-objective evolutionary algorithms (MOEAs), at least two performance metrics (one evaluating closeness to the desired Pareto front P, and the other evaluating the spread of the obtained Pareto front) need to be used. Various performance metrics have been reported [57] in this regard for MOEAs. The GD metric [11] is used to evaluate closeness to the Pareto-optimal front, whereas IGD [21] and hypervolume [57] provide combined quantitative information about closeness and diversity of the obtained Pareto fronts. The quality indicators considered in this paper are as follows:
(a) Generational Distance (GD): GD calculates the distance of the solutions on the obtained Pareto front from the solutions on the true Pareto front, and it can be expressed as follows:

GD = √(Σ_{i=1}^{n_obtained} d_i²) / n_obtained,  (7)

where n_obtained is the number of solutions in the obtained Pareto front, and d_i represents the Euclidean distance between the i-th obtained solution and the closest point of the true Pareto front. Here, the Euclidean distance is calculated in objective space.
(b) Inverse Generational Distance (IGD): IGD can be expressed as:

IGD = √(Σ_{i=1}^{n_true} d_i²) / n_true,  (8)

where n_true is the number of solutions on the true Pareto front, and d_i represents the Euclidean distance between the i-th solution on the true Pareto front and the nearest solution on the obtained front. A small sketch of the GD and IGD computations is given after this list.
(c) Hypervolume: This metric is used for quantifying both the convergence and the diversity of MOEAs. The purpose is to calculate the area/volume of the objective space that is covered by the obtained Pareto-optimal solutions. Assuming an individual x_i in P_obtained for a two-dimensional multi-objective optimization (MOO) problem defines a rectangular area a(x_i), bounded by the origin and f(x_i), the union of such rectangular areas is referred to as the hyperarea (hypervolume in the case of 3-D Pareto fronts), and in general it is called the hypervolume of P_obtained,

HF(P_obtained) = ∪_{i=1}^{n} a(x_i), ∀ x_i ∈ P_obtained.  (9)

Table 1 Simulation settings

Algorithm              Parameter settings
NSGA-II, NSGA-II-DE    N = 300, E = 100000, pc = 0.9, pm = 1.0, ηc = ηm = 20.0
MOEA/D-DE              N = 300, E = 150000, CR = 1.0, F = 0.5, pm = 1.0, ηm = 20.0
PESA-II                N = 10, A = 100, Bisection = 5, E = 10000, pc = 0.9, pm = 1.0, ηc = ηm = 20.0
SPEA-II                N = 100, A = 100, E = 10000, pc = 0.9, pm = 1.0, ηc = ηm = 20.0
DMOPSO                 Ns = 1000, Max. age = 2, I = 25000, c1, c2, w ∈ [0,1], v1, v2 = −1.0
OMOPSO                 Ns = 1000, A = 100, I = 250, ηp = 0.5, pm = 1.0, ηm = 20.0, c1, c2, w ∈ [0,1]
SMPSO                  Ns = 100, A = 100, I = 250, pm = 1.0, ηm = 20.0, c1, c2, w ∈ [0,1], v1, v2 = −1.0
MHPSO, MHPSO-CD        Ns = 1000, A = 100, I = 250, α = 0.9, min. Temp. = 0.0001, c1, c2, w ∈ [0,1]

In [11], the hyperarea ratio metric is expressed as:

HR = HF(P_obtained) / HF(P_true),  (10)

where HF(P_true) is the hyperarea or hypervolume covered by the true Pareto front of the function. The hypervolume metric provides a qualitative measure of convergence as well as diversity in a combined manner. Nevertheless, it can be used along with the GD or IGD metrics to get a better overview of the performance of an algorithm.
4. Simulation Setup: The parameters of the peer and proposed algorithms are listed in Table 1, where N, Ns, and A denote the population size, swarm size, and archive size, respectively. The maximum number of iterations and the maximum number of evaluations are denoted by I and E, respectively. Mutation and crossover probabilities are represented by pm and pc, respectively. ηm and ηc are the distribution indices of mutation and crossover, respectively. Polynomial mutation and SBX crossover operators are used in all algorithms except MOEA/D-DE, where the differential evolution crossover operator is used. The binary tournament selection strategy is used in NSGA-II, NSGA-II-DE, and SPEA-II. c1, c2, w, v1, and v2 are the parameters used to update the velocity in DMOPSO, OMOPSO, SMPSO, and the proposed MHPSO.
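The GD and IGD computations in (7) and (8) can be sketched as follows, with Euclidean distances taken in objective space against a sampled true front. The ZDT1-like front used here is only an illustrative stand-in for the true Pareto fronts of the benchmark problems.

```python
import numpy as np

def gd(obtained, true_front):
    """Generational distance: sqrt(sum d_i^2) / n_obtained, d_i = distance to nearest true point."""
    d = np.min(np.linalg.norm(obtained[:, None, :] - true_front[None, :, :], axis=2), axis=1)
    return np.sqrt(np.sum(d ** 2)) / len(obtained)

def igd(obtained, true_front):
    """Inverse generational distance: same formula with the roles of the two fronts swapped."""
    d = np.min(np.linalg.norm(true_front[:, None, :] - obtained[None, :, :], axis=2), axis=1)
    return np.sqrt(np.sum(d ** 2)) / len(true_front)

# Toy example: a ZDT1-like true front sampled on 100 points and a shifted approximation.
f1 = np.linspace(0.0, 1.0, 100)
true_front = np.column_stack([f1, 1.0 - np.sqrt(f1)])
obtained = true_front[::10] + 0.01
print(gd(obtained, true_front), igd(obtained, true_front))
```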

5.2 Optimization of Analog Circuit Design

In order to perform optimization of a circuit design (circuit sizing), knowledge of the cost functions (performance specifications as functions of the design variables) and the constraints (interdependence or tradeoffs between these specifications) is necessary. The formulation of the analytical expressions of the objective functions and constraints regulates the quality of the solutions (e.g., a convex formulation provides global solutions). Moreover, the reliability of the optimized solutions depends on the accuracy of the mathematical expressions used for optimization; over-simplified expressions can introduce errors in the results. In analog circuit design, performance specifications are interdependent and competitive. Hence, these tradeoffs in analog sizing problems can be efficiently analyzed by MOO algorithms. Similar to single-objective optimization problems, a MOO problem can be expressed as [15]:

minimize f_i(x),    i = 1, 2, ..., q;
subject to g_j(x) ≤ 0,    j = 1, 2, ..., m;
           h_k(x) = 0,    k = 1, 2, ..., p;        (11)
           x_l ∈ X,    l = 1, 2, ..., n,

where x is a vector of n variables satisfying all the constraints (g_j(x) and h_k(x)), and f_i(x) is a set of q different objective functions that need to be minimized. Most MOO algorithms are developed for either minimization or maximization.
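As an example of the form in (11), the BINH2 test function (also used in Fig. 2) can be written with its two objectives and two constraints rearranged as g_j(x) ≤ 0. The constants follow the commonly used definition of the problem and should be checked against [59].

```python
import numpy as np

def binh2_objectives(x):
    x1, x2 = x
    f1 = 4.0 * x1**2 + 4.0 * x2**2                 # f_1(x), to be minimized
    f2 = (x1 - 5.0)**2 + (x2 - 5.0)**2             # f_2(x), to be minimized
    return np.array([f1, f2])

def binh2_constraints(x):
    x1, x2 = x
    g1 = (x1 - 5.0)**2 + x2**2 - 25.0              # g_1(x) <= 0
    g2 = 7.7 - (x1 - 8.0)**2 - (x2 + 3.0)**2       # g_2(x) <= 0
    return np.array([g1, g2])

bounds = [(0.0, 5.0), (0.0, 3.0)]                  # x_l in X
x = np.array([2.0, 1.5])
feasible = bool(np.all(binh2_constraints(x) <= 0.0))
print(binh2_objectives(x), feasible)
```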
In this paper, the proposed MHPSO method is applied to three analog circuit sizing problems. These test problems consist of multiple design variables, multiple nonlinear constraints, and objective functions confining a large search space. These circuits are represented in the form of multi-objective optimization problems as described in (11). Performance goals such as gain, power consumption, and noise figure are considered as objective functions, whereas biasing conditions and circuit parameters (i.e., SR, GBW, etc.) are considered as constraints. The descriptions of the test circuits and the generated Pareto fronts are presented as follows.

5.2.1 Two-stage OpAmp

The operational amplifier is one of the fundamental elements in analog system design and is frequently used as a test circuit for analog circuit design optimization. A two-stage OpAmp [2] is shown in Fig. 3. The load capacitance C_L is set to 10 pF. To ensure stable operation of the amplifier, the relation between the Miller capacitance (C_c) and the load capacitance, C_c > 2.2(g_m2/g_m6)C_L, is maintained. The circuit is designed using 0.18 μm CMOS process technology, and a fixed power supply of 1.8 V is used. W and L for all transistors are kept within [0.24, 1000 μm] and [0.18, 10 μm], respectively. The bias current is varied between 1 μA and 2.5 mA. Appropriate differential operation and current biasing are ensured by the matching relations (e.g., M1≡M2, M3≡M4, and M5≡M7) between the transistors. The analytical forms of the performance specifications [2] are expressed in (12). The design problem is formulated as a multi-objective optimization problem using the constraints shown in Table 2.

Fig. 3 Operational Amplifier circuit

Table 2 OpAmp: constraints and specifications

Specifications   Description                               Constraints
φ [degree]       Phase margin                              ≥ 60
P_min [μW]       Min. power consumption                    ≤ 300
SR [V/μs]        Slew rate                                 ≥ 20
ICMR [V]         Input common mode range                   [0.8, 1.6]
GBW [MHz]        Gain bandwidth product                    ≤ 30
Y [mV]           V_in^min − V_ss − √(I_5/β) − V_T1^max     ≥ 100

A population size of 100 particles is evolved over 200 generations in the course of the optimization process. The Pareto fronts are generated to analyze the tradeoff between gain and power consumption. Further, the comparison between the Pareto fronts generated by the proposed algorithms (MHPSO and MHPSO-CD) and other evolutionary algorithms is shown in Fig. 6. It can be observed from Fig. 6 that the gain can be increased at the cost of power consumption. The Pareto front consists of two regions representing different rates of gain increment with respect to power consumption. In the first region, the gain increases at a slower rate than in the second region. However, a high gain at the cost of large power consumption may not usually be preferred. The front generated by MHPSO-CD is comparatively more diverse and dense than the fronts obtained by the other standard algorithms.
A_v = (g_m1 g_m6) / [(g_ds2 + g_ds4)(g_ds6 + g_ds7)]
P_DC = V_dd (I_1 + I_2 + I_3)                              (12)
GBW = g_m1 / (2π C_c)
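During optimization, the cost functions in (12) are evaluated from the small-signal parameters implied by the current design variables. The sketch below shows such an evaluation; the numerical operating point is a placeholder, and in the actual flow g_m and g_ds would come from the device model or a circuit simulator for the chosen W, L, and bias currents.

```python
import math

def two_stage_opamp_costs(gm1, gm6, gds2, gds4, gds6, gds7, vdd, i1, i2, i3, cc):
    av = (gm1 * gm6) / ((gds2 + gds4) * (gds6 + gds7))   # open-loop gain, as in (12)
    pdc = vdd * (i1 + i2 + i3)                           # static power consumption
    gbw = gm1 / (2.0 * math.pi * cc)                     # gain-bandwidth product
    return av, pdc, gbw

# Placeholder operating point (assumed values, for illustration only).
av, pdc, gbw = two_stage_opamp_costs(
    gm1=1e-4, gm6=5e-4, gds2=2e-6, gds4=2e-6, gds6=5e-6, gds7=5e-6,
    vdd=1.8, i1=20e-6, i2=40e-6, i3=200e-6, cc=2e-12)
print(20 * math.log10(av), pdc, gbw / 1e6)               # gain [dB], power [W], GBW [MHz]
```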

Fig. 4 Folded cascode operational amplifier circuit


Table 3 Folded cascode amplifier: constraints and specifications

Specifications        Constraints
DC gain (dB)          ≥ 55
GBW (MHz)             ≥ 2
Phase margin (°)      ≥ 60
Output swing (V)      ≥ 1.2
Slew rate (V/μs)      ≥ 1

5.2.2 Folded Cascode OpAmp

The folded cascode topology is commonly used to obtain improved input common-mode range, self-compensation, and high gain. A folded cascode OpAmp is shown in Fig. 4. The load capacitance C_L is set to 5 pF. The circuit is designed using 0.18 μm CMOS process technology, and a fixed power supply of 1.8 V is used. The width (W) and length (L) of all transistors are kept within [0.24, 1000 μm] and [0.18, 10 μm], respectively. The bias current varies between 1 μA and 2.5 mA. Appropriate differential operation and current biasing are ensured by the matching relations (e.g., M1≡M2, M3≡M4, and M5≡M7) between the transistors. The analytical forms of the performance specifications [2] are expressed in (13). The design problem is formulated as a multi-objective optimization problem using the constraints shown in Table 3. A population size of 100 particles is evolved over 200 generations in the course of the optimization process. The Pareto fronts are generated to visualize the tradeoff between gain and power consumption. Further, the comparison between the Pareto fronts generated by the proposed algorithms (MHPSO and MHPSO-CD) and other evolutionary algorithms is shown in Fig. 7. It can be observed from Fig. 7 that increased gain is obtained at the cost of increased power dissipation. The front generated by MHPSO-CD is comparatively more diverse and dense than the fronts generated by the other standard algorithms.

Fig. 5 Low noise amplifier circuit

A_v = g_m2 R_out
R_out = (g_m8 r_ds8 r_ds6) ∥ [g_m10 r_ds10 (r_ds2 ∥ r_ds4)]
P_DC = V_dd (I_1 + I_2 + I_3 + I_4)                        (13)
GBW = g_m2 / (2π C_L)
SR = I_2 / C_L

5.2.3 Low Noise Amplifier (LNA)

To demonstrate the effectiveness of the proposed optimization technique, a cascode LNA circuit [5] is considered as a case study. The LNA consists of a common source amplifier with inductive source degeneration and a common gate load. The schematic of the LNA, shown in Fig. 5, consists of two MOSFETs, M1 and M2, and a tuning circuit (C_out and L_out), which is tuned to a center frequency of 2.4 GHz. Both inductors (L_g, L_s) are used for matching at the input end. The upper bound of the noise figure (NF) is kept at 2.5 (0.4 dB) with a maximum power consumption of 8 mW. The quality factor of the LNA is kept between 3 and 5.

Table 4 summarizes the variables and ranges considered during the optimization of the LNA. The details of the different specifications and constraints during the optimization process are shown in Table 5.
A population of 100 particles is evolved over 200 generations to obtain the Pareto front. Figure 8 shows the optimal Pareto fronts for minimum NF and maximum gain (S21 parameter) of the LNA, obtained by the different algorithms. It can be seen from

Table 4 LNA: variables and ranges

Variables Description Lower bound Upper bound

W1 , W2 [μm] Width of M1 and M2 1 100


W3 [μm] Width of M3 1 8
Id [mA] Forward drain current 0.1 4.5

Table 5 LNA: constraints and specifications. *Ratio of intrinsic gate capacitance of M1 to total capacitance (C_tot = C_gs + C_ext + C_p)

Specifications   Description                                Constraints
S11 [dB]         Input reflection coefficient @ 2.4 GHz     ≤ −20
S12 [dB]         Reverse isolation @ 2.4 GHz                ≤ −50
S21 [dB]         Forward power gain @ 2.4 GHz               ≥ 10
S22 [dB]         Output reflection coefficient @ 2.4 GHz    ≤ −0.9
NF_min [dB]      Min. noise figure @ 2.4 GHz                ≤ 2.5
P_min [mW]       Min. power consumption @ 2.4 GHz           ≤ 8
f_cf [GHz]       Center frequency                           [2.1, 2.7]
C_gs/C_tot       Capacitance ratio*                         ≤ 1
Q                Quality factor                             [3, 5]
Z_out^min [Ω]    Min. output impedance (real part)          ≥ 50

these plots that, as the gain increases up to about 15 dB, the NF increases slowly, and beyond that the NF starts to increase significantly at high gain values. An LNA with a high NF deteriorates the performance of a receiver system significantly by passing input signals coupled with large noise to the subsequent blocks. Therefore, the feasible region having a low noise figure and moderate gain can be considered to finalize the design.

The Pareto fronts shown in Figs. 6, 7, and 8 are assessed qualitatively based on the hypervolume quality indicator (as shown in Table 6). Fronts with higher hypervolume represent a better quality of the optimized solutions in terms of diversity. MHPSO and MHPSO-CD provide superior hypervolume for all three test circuits. In the last row of Table 6, b/e/w presents the number of test cases where MHPSO and/or MHPSO-CD (either one) is better than, equal to, or worse than the other cited methods at a significance level of 0.05, respectively. The proposed algorithms produce better Pareto fronts for the analog and RF circuits in terms of diversity and density of the nondominated solutions in objective space.

5.3 Benchmark Results and Analysis

In order to determine the competitiveness of MHPSO, it is compared with the state-of-the-art algorithms, i.e., NSGA-II, NSGA-II-DE, MOEA/D-DE, PESA-II, SPEA-II, DMOPSO, OMOPSO, and SMPSO. The performance of the algorithms is quantitatively analyzed by computing three quality metrics, i.e., GD, IGD, and HV.

Fig. 6 Pareto fronts generated for Two-stage OpAmp with a NSGA-II, b NSGA-II-DE, c MOEA/D-DE, d
PESA-II, e SPEA-II, f DMOPSO, g OMOPSO, h SMPSO, i MHPSO, and j MHPSO-CD

Fig. 7 Pareto fronts generated for Folded cascode OpAmp with a NSGA-II, b NSGA-II-DE, c MOEA/D-
DE, d PESA-II, e SPEA-II, f DMOPSO, g OMOPSO, h SMPSO, i MHPSO, and j MHPSO-CD

Fig. 8 Pareto fronts generated for LNA with a NSGA-II, b NSGA-II-DE, c MOEA/D-DE, d PESA-II, e
SPEA-II, f DMOPSO, g OMOPSO, h SMPSO, i MHPSO, and j MHPSO-CD

Table 6 Hypervolume values for analog circuit optimization problems

Algorithm Two-stage opamp Folded cascode opamp LNA

NSGA-II 0.6045 0.8371 0.6012


NSGA-II-DE 0.6980 0.8461 0.6421
MOEA/D-DE 0.6494 0.7021 0.7256
PESA-II 0.5996 0.7433 0.5982
SPEA-II 0.6079 0.7312 0.6085
DMOPSO 0.6081 0.6086 0.5912
OMOPSO 0.7080 0.7997 0.7134
SMPSO 0.8029 0.8200 0.7465
MHPSO 0.8345 0.8361 0.7496
MHPSO-CD 0.8556 0.8472 0.7782
b/e/w 8/0/0 8/0/0 8/0/0

The GD, IGD, and HV values for the test problems, solved with the different algorithms, are listed in Table 7, where the numerical values represent the mean over 30 runs of every algorithm for each problem. The best performance is highlighted for each test problem. The comparison between the proposed algorithms and the peer algorithms is represented by b/e/w, where b, e, and w represent the number of occasions on which MHPSO and/or MHPSO-CD performs better than, equal to, or worse than the other algorithms, respectively. As evident from Table 7, the GD values for MHPSO and/or
Table 7 GD, IGD, and HV metrics for standard test functions
Methods ZDT1 ZDT2 ZDT3 ZDT4 ZDT6 SCH FON KUR BINH2 CEX OZY TNK

GD metric
NSGA-II 1.1517e−01 2.0507e−01 9.6721e−02 4.0117e+01 0.2416e+01 1.1460e−02 2.4156e−03 7.3262e−02 18.1617 20.0060 0.0119 2.7255
NSGA-II-DE 2.3897e−03 2.0976e−01 2.3542e−01 7.2736e−02 2.3307e−01 8.4757e−01 2.0977e−01 1.7926e+01 2.7162e−03 3.4408e−03 2.1562e−03 6.0288e−03
MOEA/D-DE 3.2241e−03 2.7419e−01 1.7304e−01 1.7458e−03 2.8336e−01 8.5561e−01 2.7198e−01 1.7886e+01 2.2786e−03 3.2128e−04 2.4124e−03 6.0346e−03
PESA-II 0.8401e−04 2.7438e−04 5.7949e−04 7.8818e−03 7.8691e−03 1.9175e−01 5.3821e−04 2.4408e−04 2.4896e−02 4.2277e−04 1.4202e−03 7.3899e−04
SPEA-II 3.4137e−03 7.7292e−03 1.3506e−03 1.9055e−01 1.1563e−01 4.3401e−01 2.6145e−04 2.4346e−04 3.2894e−02 2.5679e−04 1.7636e−03 5.9922e−04
DMOPSO 1.0490e−04 4.9559e−05 9.8536e−05 8.9505e−05 4.1801e−05 2.2700e−04 1.2825e−04 4.1136e−04 9.6670e−03 1.8696e−02 0.5470 0.0952
OMOPSO 2.0610e−04 9.0625e−05 2.3206e−04 5.8820e−01 2.6670e−02 2.2756e−04 1.1879e−04 6.9291e−04 0.0258 2.1128e−04 1.7938e−03 8.3382e−04
SMPSO 1.0256e−04 4.8441e−05 1.1824e−04 7.4796e−05 1.6137e−03 2.2366e−04 1.2841e−04 2.4206e−04 2.7278e−02 2.0802e−04 1.6630e−03 9.0512e−04
MHPSO 1.0330e−05 4.5602e−05 1.1728e−05 5.8226e−05 3.4366e−0 1.2925e−04 4.0207e−04 2.4428e−04 0.11194e−03 3.9590e−03 1.6015e−03 4.6015e−04
MHPSO-CD 1.9838e−05 4.3607e−05 4.1766e−05 8.0437e−05 5.0003e−04 1.1891e−04 2.6807e−04 1.8501e−04 0.2011e−04 1.8941e−04 1.0369e−03 4.5735e−04
b/e/w 8/0/0 8/0/0 8/0/0 8/0/0 8/0/0 8/0/0 4/0/4 8/0/0 6/0/0 6/0/0 6/0/0 6/0/0
IGD metric
NSGA-II 7.9356e−02 1.4510e−01 7.2018e−03 1.4407e+01 8.4401e−01 2.2365e−03 7.8359e−04 1.2120e−02 3.8746e−02 2.6953e−03 1.3635e−03 0.6128
NSGA-II-DE 1.8796e−03 1.6733e−01 2.4908e−01 5.7595e−02 1.8216e−01 0.1467e+01 1.2627e−01 1.8181e+01 3.1612e−04 3.4915e−04 1.4290e−03 7.8635e−02
MOEA/D-DE 5.4652e−03 2.2529e−01 2.1255e−01 1.2372e−03 2.5408e−01 0.2560e+01 1.7374e−01 1.8290e+01 3.9562e−04 3.4630e−04 1.6056e−03 7.1737e−03
PESA-II 2.9049e−03 2.3787e−04 2.4666e−04 5.8863e−03 5.8600e−04 2.1274e−02 1.0119e−03 3.1286e−04 2.1411e−04 3.4398e−04 1.0047e−03 9.0237e−04
Table 7 continued

Methods ZDT1 ZDT2 ZDT3 ZDT4 ZDT6 SCH FON KUR BINH2 CEX OZY TNK

SPEA-II 1.0155e−03 1.5281e−02 5.7333e−04 3.3890e−02 1.3288e−02 3.1697e−02 2.3973e−04 1.3727e−04 1.5887e−04 5.0045e−04 8.8447e−03 6.1796e−04
DMOPSO 1.5926e−04 1.4076e−04 2.8151e−04 1.6759e−04 1.3987e−04 1.3766e−03 2.0804e−04 2.2666e−04 1.2544e−04 8.2293e−03 1.2335e−02 8.0866e−02
OMOPSO 1.4202e−04 1.4369e−04 1.2177e−04 1.6728e−01 1.3526e−04 3.3847e−04 2.0799e−04 1.6395e−04 1.1443e−04 1.9562e−04 1.0112e−02 4.9077e−04
SMPSO 1.3510e−04 1.3807e−04 1.0258e−04 1.3529e−04 1.3442e−04 3.5439e−04 2.4157e−04 1.5296e−04 1.1615e−04 1.9699e−04 1.0219e−02 6.2999e−04
MHPSO 1.3050e−05 1.0899e−05 4.1827e−05 9.8683e−04 7.9775e−05 3.8836e−04 7.4135e−03 1.3913e−04 1.0712e−04 2.4649e−03 1.1682e−03 4.1682e−04
MHPSO-CD 1.2648e−05 1.0781e−05 3.1180e−05 7.0228e−04 4.2758e−04 2.7278e−04 2.8085e−02 1.6301e−04 1.0547e−04 1.2167e−04 1.8959e−04 4.0733e−04
b/e/w 8/0/0 8/0/0 8/0/0 8/0/0 8/0/0 8/0/0 4/0/4 7/0/1 6/0/0 6/0/0 6/0/0 6/0/0

HV metric
NSGA-II 0.6575 0.3241 0.5131 0.6518 0.3773 0.6620 0.3079 0.3994 0.7249 0.7629 0.3765 0.3322
NSGA-II-DE 0.6362 0.4092 0.4856 0.4608 0.4679 0.6397 0.4092 0.4856 0.7287 0.7651 0.7472 0.4402
MOEA/D-DE 0.6644 0.3311 0.5161 0.6646 0.4047 0.7936 0.3155 0.4034 0.7239 0.7703 0.7243 0.4866
PESA-II 0.6313 0.3232 0.5037 0.5739 0.3761 0.7346 0.3045 0.3962 0.7215 0.7717 0.7066 0.3038
SPEA-II 0.6164 0.0528 0.4868 0.6577 0.0447 0.5504 0.3102 0.3996 0.7242 0.7752 0.7045 0.3047
DMOPSO 0.6615 0.3283 0.5136 0.6609 0.4013 0.8284 0.3118 0.3962 0.7283 0.8305 0.7567 0.7194
OMOPSO 0.6603 0.3278 0.5115 0.6547 0.4012 0.8299 0.3123 0.3972 0.7275 0.7759 0.7020 0.3060
SMPSO 0.6617 0.3286 0.5155 0.6615 0.4012 0.8208 0.3124 0.4001 0.7275 0.7756 0.6981 0.3047
MHPSO 0.6651 0.3321 0.7854 0.6658 0.2683 0.7265 0.4553 0.5219 0.8067 0.7457 0.7347 0.7303
MHPSO-CD 0.6675 0.4351 0.7865 0.6708 0.4816 0.8291 0.5131 0.5367 0.8267 0.8709 0.7821 0.7416
b/e/w 8/0/0 8/0/0 8/0/0 8/0/0 8/0/0 7/0/1 8/0/0 8/0/0 6/0/0 6/0/0 6/0/0 6/0/0

MHPSO-CD are better for most of the test functions except FON, which means that the resulting Pareto fronts from MHPSO/MHPSO-CD are closer to the true Pareto fronts (standard solution sets) than the fronts generated by the rest of the algorithms, except for that one function. The IGD metric indicates the closeness to the actual front as well as the diversity of the obtained Pareto front; MHPSO performs better for all functions except FON and KUR. A larger value of the HV metric indicates better convergence and diversity of the obtained Pareto front. The results obtained for the HV metric show that MHPSO performs better for all functions except SCH. It can be observed from the experiments (analog circuits and test functions) that MHPSO/MHPSO-CD provides superior Pareto fronts, which is supported by the values of the quality metrics.

6 Conclusion

A new algorithm, based on the hybridization of PSO and SA, is presented to solve multi-objective optimization problems. An archive of nondominated solutions is maintained during PSO, which is followed by SA. Simulated annealing is applied on the reduced solution space (the solutions excluding the archive populated by PSO) to update the archive with more nondominated solutions, hence improving the quality of the Pareto fronts in terms of diversity and density.

Further, an extension of this approach incorporating the crowding distance is also proposed. The application of the crowding distance during archive maintenance improves the diversity of the obtained Pareto fronts. The proposed scheme is applied to the optimization of two popular analog circuits and one RF circuit. Each circuit is tested for two competing objectives, i.e., gain versus power consumption in the case of the two-stage and folded cascode operational amplifiers, and gain versus noise figure in the case of the LNA circuit.

The proposed methodology can be applied to real-life multi-objective optimization problems found in different technological and social scenarios. The application of the proposed approach can further be extended to analyze the yield of the designed circuit and the effect of process variation on the obtained Pareto front.

Declarations
Conflict of interest There is no conflict of interests/competing interests applicable to this manuscript

References
1. E. Alaybeyoğlu, F. Ugranli, Analog building blocks optimization for low-pass filter of IEEE 802.11n
wireless lan: Ota and ccii. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 40(11), 2199–2210
(2021)
2. P.E. Allen, D.R. Holberg, CMOS Analog Circuit Design. Oxford series in electrical and computer
engineering. Oxford University Press (2002)
3. G. Alpaydin, S. Balkir, G. Dundar, An evolutionary approach to automatic synthesis of high-
performance analog integrated circuits. IEEE Trans. Evol. Comput. 7(3), 240–252 (2003)
4. Analog design automation, inc
5. P. Andreani, H. Sjöland, Noise optimization of an inductively degenerated cmos low noise amplifier.
IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 48(9), 835–841 (2001)

6. A. Arab, A. Alfi, An adaptive gradient descent-based local search in memetic algorithm applied to
optimal controller design. Inf. Sci. 299, 117–142 (2015)
7. M. Barros, J. Guilherme, N. Horta, Analog circuits optimization based on evolutionary computation
techniques. Integr. VLSI J. 43(1), 136–155 (2010)
8. Cadence inc., products: composer, virtuoso, diva, neocircuit, neocell, ultrasim, ncsim
9. V. Chankong, Y.Y. Haimes, Multiobjective Decision Making: Theory and Methodology (Courier Dover
Publications, Mineola, 2008)
10. M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional
complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002)
11. C.A. Coello Coello, G.B. Lamont, D.A. Van Veldhuizen, Evolutionary Algorithms for Solving
Multi-Objective Problems (Genetic and Evolutionary Computation). Springer-Verlag New York, Inc.,
Secaucus, NJ, USA (2006)
12. D.W. Corne, N.R. Jerram, J.D. Knowles, M.J. Oates, Pesa-ii: region-based selection in evolutionary
multiobjective optimization. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary
Computation, GECCO’01, pp. 283–290, San Francisco, CA, USA . Morgan Kaufmann Publishers Inc
(2001)
13. P.-E. Danielsson, Euclidean distance mapping. Comput. Graph. Image Process. 14(3), 227–248 (1980)
14. F. De Bernardinis, M.I Jordan, A. Sangiovanni Vincentelli, Support vector machines for analog circuit
performance representation. In Proceedings of the 40th Annual Design Automation Conference, pp.
964–969. ACM (2003)
15. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms (Wiley, Hoboken, 2001)
16. K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
17. M. del Mar Hershenson, S.P Boyd, T.H Lee, Gpcad: a tool for cmos op-amp synthesis. In Proceedings
of the 1998 IEEE/ACM International Conference on Computer-Aided Design, pp. 296–303. ACM
(1998)
18. S. Du, H. Liu, Q. Hong, C. Wang, A surrogate-based parallel optimization of analog circuits using
multi-acquisition functions. AEU Int. J. Electron. Commun. 146, 154105 (2022)
19. S. Du, H. Liu, H. Yin, Yu. Fei, J. Li, A local surrogate-based parallel optimization for analog circuits.
AEU Int. J. Electron. Commun. 134, 153667 (2021)
20. M. Fakhfakh, Y. Cooren, A. Sallem, M. Loulou, P. Siarry, Analog circuit design optimization through
the particle swarm optimization technique. Analog Integr. Circuits Signal Process. 63(1), 71–82 (2010)
21. M. Fakhfakh, Y. Cooren, A. Sallem, M. Loulou, P. Siarry, Analog circuit design optimization through
the particle swarm optimization technique. Analog Integr. Circuits Signal Process. 63(1), 71–82 (2010)
22. C.M. Fonseca, P.J. Fleming, Multiobjective optimization and multiple constraint handling with evolu-
tionary algorithms. ii. Application example. IEEE Trans. Syst. Man Cybernet. Part A: Syst. Humans
28(1), 38–47 (1998)
23. P. Gentili, F. Piazza, A. Uncini, Efficient genetic algorithm design for power-of-two fir filters. In
Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, vol. 2,
pp. 1268–1271. IEEE (1995)
24. G. Gielen, H. Walscharts, W. Sansen, Analog circuit design optimization based on symbolic simulation
and simulated annealing. In Solid-State Circuits Conference, 1989. ESSCIRC ’89. Proceedings of the
15th European, pp. 252–255 (1989)
25. G.G.E. Gielen, R.A. Rutenbar, Computer-aided design of analog and mixed-signal integrated circuits.
Proc. IEEE 88(12), 1825–1854 (2000)
26. E.H. Graeb, Analog Design Centering and Sizing., 1st edn. (Springer Publishing Company, Incorpo-
rated, 2007)
27. I. Guerra-Gómez, E. Tlelo-Cuautle, T. McConaghy, G. Gielen, Optimizing current conveyors by evo-
lutionary algorithms including differential evolution. In: Electronics, Circuits, and Systems. ICECS
2009. 16th IEEE International Conference on, pp. 259–262. IEEE (2009)
28. D. Joshi, S. Dash, U. Agarwal, R. Bhattacharjee, G. Trivedi, Analog circuit optimization based on
hybrid particle swarm optimization. In: 2015 International Conference on Computational Science and
Computational Intelligence (CSCI), pp. 164–169 (2015)
29. N. Karaboga, A. Kalinli, D. Karaboga, Designing digital iir filters using ant colony optimisation
algorithm. Eng. Appl. Artif. Intell. 17(3), 301–309 (2004)
30. N. Karaboga, B. Cetinkaya, Design of digital fir filters using differential evolution algorithm. Circuits
Syst. Signal Process. 25(5), 649–660 (2006)

31. J. Kennedy, R. Eberhart, Particle swarm optimization. In: Neural Networks, 1995. Proceedings., IEEE
International Conference on, vol. 4, pp. 1942–1948 (1995)
32. J. Kennedy, R.C. Eberhart, Swarm Intelligence (Morgan Kaufmann Publishers Inc., San Francisco,
CA, USA, 2001)
33. J. Knowles, D. Corne, The pareto archived evolution strategy: a new baseline algorithm for pareto mul-
tiobjective optimisation. In: Evolutionary Computation, 1999. CEC 99. Proceedings of the Congress
on, vol. 1, pp. 98–105. IEEE (1999)
34. J.R. Koza, F.H. Bennett, D. Andre, A.M. Keane, F. Dunlap, Automated synthesis of analog electrical circuits by means of genetic programming. IEEE Trans. Evol. Comput. 1(2), 109–128 (1997)
35. W. Kruiskamp, D. Leenaerts, Darwin: Cmos opamp synthesis by means of a genetic algorithm. In:
Proceedings of the 32Nd Annual ACM/IEEE Design Automation Conference, DAC ’95, pp. 433–438,
New York, NY, USA, ACM (1995)
36. F. Kursawe, A variant of evolution strategies for vector optimization. In: Proceedings of the 1st Work-
shop on Parallel Problem Solving from Nature, PPSN I, pp. 193–197, London, UK, UK, Springer-Verlag
(1991)
37. A. Lberni, M.A. Marktani, A. Ahaitouf, A. Ahaitouf, Efficient butterfly inspired optimization algorithm
for analog circuits design. Microelectron. J. 113, 105078 (2021)
38. A. Lberni, A. Sallem, M.A. Marktani, N. Masmoudi, A. Ahaitouf, A. Ahaitouf, Influence of the
operating regimes of mos transistors on the sizing and optimization of cmos analog integrated circuits.
AEU Int. J. Electron. Commun. 143, 154023 (2022)
39. H. Li, Q. Zhang, Multiobjective optimization problems with complicated pareto sets, moea/d and
nsga-ii. IEEE Trans. Evol. Comput. 13(2), 284–302 (2009)
40. H. Liu, A. Singhee, R.A. Rutenbar, L. Richard, Carley Remembrance of circuits past: macromodeling
by data mining in large analog design spaces. In: Proceedings of the 39th Annual Design Automation
Conference, pp. 437–442. ACM (2002)
41. S. Martínez, C.A. Coello Coello, A multi-objective particle swarm optimizer based on decomposition.
In GECCO, pp. 69–76 (2011)
42. Y. Mousavi, A. Alfi, A memetic algorithm applied to trajectory control by tuning of fractional order
proportional-integral-derivative controllers. Appl. Soft Comput. 36, 599–617 (2015)
43. A.J. Nebro, J.J. Durillo, J. García-Nieto, C.A. Coello Coello, F. Luna, E. Alba, Smpso: a new pso-
based metaheuristic for multi-objective optimization. In: 2009 IEEE Symposium on Computational
Intelligence in Multicriteria Decision-Making (MCDM 2009), pp. 66–73. IEEE Press (2009)
44. Z.-Q. Ning, T. Mouthaan, H. Wallinga, Seas: a simulated evolution approach for analog circuit synthe-
sis. In: Custom Integrated Circuits Conference, 1991., Proceedings of the IEEE 1991, pp. 5–2. IEEE
(1991)
45. A. Osyczka, S. Kundu, A new method to solve generalized multicriteria optimization problems using
the simple genetic algorithm. Struct. Optim. 10(2), 94–99 (1995)
46. S.M.A. Pahnehkolaei, A. Alfi, A. Sadollah, J.H. Kim, Gradient-based water cycle algorithm with
evaporation rate applied to chaos suppression. Appl. Soft Comput. 53, 420–440 (2017)
47. A. Papadimitriou, M. Bucher, Multi-objective low-noise amplifier optimization using analytical model
and genetic computation. Circuits Syst. Signal Process. 36(12), 4963–4993 (2017)
48. M. Reyes, C.A. Coello Coello, Improving pso-based multi-objective optimization using crowding,
mutation and -dominance, in Third International Conference on Evolutionary MultiCriterion Opti-
mization, EMO 2005, volume 3410 of LNCS. ed. by C.A. Coello, A. Hernández, E. Zitler (Springer,
2005), pp.509–519
49. J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings
of the 1st International Conference on Genetic Algorithms, pp. 93–100, Hillsdale, NJ, USA, L. Erlbaum
Associates Inc (1985)
50. G. Stehr, H. Graeb, K. Antreich, Feasibility regions and their significance to the hierarchical optimiza-
tion of analog and mixed-signal systems. In: Modeling, Simulation, and Optimization of Integrated
Circuits, pp. 167–184. Springer (2003)
51. M. Tanaka, H. Watanabe, Y. Furukawa, T. Tanino, Ga-based decision support system for multicriteria
optimization. In: 1995 IEEE International Conference on Systems, Man and Cybernetics. Intelligent
Systems for the 21st Century, vol. 2, pp. 1556–1561 (1995)
52. S.K. Tiwary, P.K. Tiwary, R.A. Rutenbar, Generation of yield-aware pareto surfaces for hierarchical
circuit design space exploration. In: Proceedings of the 43rd Annual Design Automation Conference,
pp. 31–36. ACM (2006)

53. G.I. Tombak, ŞN. Güzelhan, E. Afacan, G. Dündar, Simulated annealing assisted nsga-iii-based multi-
objective analog ic sizing tool. Integration 85, 48–56 (2022)
54. J.-T. Tsai, J.-H. Chou, T.-K. Liu, Optimal design of digital iir filters by using hybrid Taguchi genetic
algorithm. IEEE Trans. Ind. Electron. 53(3), 867–879 (2006)
55. T.O. Weber, W.A.M. Van Noije, Analog design synthesis method using simulated annealing and particle
swarm optimization. In: Proceedings of the 24th symposium on Integrated circuits and systems design,
pp. 85–90. ACM (2011)
56. P. Xiaoying, Z. Jing, C. Hao, C. Xuejing, H. Kaikai, A differential evolution-based hybrid nsga-ii
for multi-objective optimization. In: 2015 IEEE 7th International Conference on Cybernetics and
Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM),
pp. 81–86 (2015)
57. E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results.
Evol. Comput. 8(2), 173–195 (2000)
58. E. Zitzler, M. Laumanns, L. Thiele, SPEA2: improving the strength pareto evolutionary algorithm.
Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute
of Technology (ETH), Zurich, Switzerland (2001)
59. J.B. Zydallis, D.A. Van Veldhuizen, G.B Lamont, A Statistical Comparison of Multiobjective Evo-
lutionary Algorithms Including the MOMGA-II, pp. 226–240. Springer Berlin Heidelberg, Berlin,
Heidelberg (2001)

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and applicable
law.
