
A Comparison of Particle Swarm Optimization and Genetic Algorithms for a
Phased Array Synthesis Problem

D. W. Boeringer and D. H. Werner

Department of Electrical Engineering
The Pennsylvania State University
University Park, PA 16802

Abstract - Particle swarm optimization is a recently invented high-performance
optimizer that possesses several highly desirable attributes, including the fact that the
basic algorithm is very easy to understand and implement. It is similar in some ways to
genetic algorithms or evolutionary algorithms, but generally requires only a few lines of
code. In this paper, a particle swarm optimizer is implemented and compared to a genetic
algorithm for phased array synthesis of a far field sidelobe notch, using amplitude-only,
phase-only, and complex tapering. The results show that some optimization scenarios are
better suited to one method versus the other (i.e. particle swarm optimization performs
better in some cases while genetic algorithms perform better in others), which implies
that the two methods traverse the problem hyperspace differently. Although simple, the
particle swarm optimizer shows good possibilities for electromagnetic optimization.

Particle Swarm Optimization. Particle swarm optimization originated in studies of
synchronous bird flocking and fish schooling, when the investigators realized that their
simulation algorithms possessed an optimizing characteristic [1]. Consider the
simultaneous optimization of N variables. A collection or swarm of particles is defined,
where each particle's position in the N-dimensional problem space corresponds to a
candidate solution to the optimization problem. Each of these particle positions is scored
to obtain a scalar cost based on how well it solves the problem. These particles then fly
through the N-dimensional problem space subject to both deterministic and stochastic
update rules to new positions, which are subsequently scored. As the particles traverse
the problem hyperspace, each particle remembers its own personal best position that it
has ever found, called its local best. Each particle also knows the best position found by
any particle in the swarm, called the global best. On successive iterations, particles are
'tugged' toward these prior best solutions with linear spring-like attraction forces.
Overshoot and undershoot combined with stochastic adjustment explore regions
throughout the problem hyperspace, eventually settling down near a good solution.
Electromagnetics researchers are just beginning to explore potential applications of this
relatively new optimization technique [2], [3].

Consider the flowchart shown in Figure 1. In this study, the algorithm starts by
initializing a group of 50 particles, with random positions in a 200-dimensional
hyperspace, constrained between zero and one in each dimension. A set of random
velocities is also initialized, with values between -1 and 1. For our phased array synthesis
problem, the 200-dimensional position vector is mapped to 100 amplitude and 100 phase
weights and the corresponding far field pattern is scored for each particle. After all these
particles are scored, the best performer is identified as the initial global best.
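The initialization step described above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the cost function below is a stand-in placeholder, since the paper scores each particle by evaluating its far field pattern.

```python
import numpy as np

N_PARTICLES, N_DIMS = 50, 200  # 100 amplitude + 100 phase weights per particle

rng = np.random.default_rng(0)

# Random positions in [0, 1] and random velocities in [-1, 1], as described above.
positions = rng.uniform(0.0, 1.0, size=(N_PARTICLES, N_DIMS))
velocities = rng.uniform(-1.0, 1.0, size=(N_PARTICLES, N_DIMS))

def split_weights(x):
    """Map a 200-dimensional position to 100 amplitude and 100 phase weights."""
    return x[:100], x[100:]

def cost(x):
    """Placeholder cost; the paper scores the particle's far field pattern instead."""
    amp, phase = split_weights(x)
    return float(np.sum((amp - 0.5) ** 2) + np.sum(phase ** 2))

scores = np.array([cost(p) for p in positions])
local_best = positions.copy()        # each particle's best-so-far position
local_best_score = scores.copy()
g = int(np.argmin(scores))           # best performer becomes the initial global best
global_best = positions[g].copy()
global_best_score = scores[g]
```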

0-7803-7846-6/03/$17.00 © 2003 IEEE

Now the particles are flown through the problem hyperspace for 10,000 iterations
(500,000 cost function evaluations), using stochastic velocity and position update rules.
For each particle in turn, the first step is to update the velocity separately along each
dimension according to the velocity update rule given in Figure 1. Three components
typically contribute to the new velocity. The first part ('inertia,' 'momentum,' or 'habit')
is proportional to the old velocity and is the tendency of the particle to continue in the
same direction it has been traveling. This component is scaled by the constant w, taken
here to have a value of 0.4. The second component of the velocity update equation
('memory,' 'self-knowledge,' 'nostalgia,' or 'remembrance') is a linear attraction toward
the best position ever found by the given particle (often called local best), scaled by the
product of a fixed constant φ1 and a random number rand() between zero and one. Note
that a different random number is used for each dimension. The third component of the
velocity update equation ('cooperation,' 'social knowledge,' 'group knowledge,' or
'shared information') is a linear attraction toward the best position found by any particle
(often called global best), scaled by the product of a fixed constant φ2 and another random
number rand() between zero and one chosen separately for each dimension. Following
common practice, we set φ1 = φ2 = 2. These paradigms allow particles to profit both from
their own discoveries as well as the collective knowledge of the entire swarm, mixing
local and global information uniquely for each particle.
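The three-term velocity update just described (inertia plus 'memory' plus 'cooperation', with w = 0.4, φ1 = φ2 = 2, and a fresh random number per dimension) can be sketched as follows; the position and best vectors at the bottom are illustrative stand-ins, not values from the paper.

```python
import numpy as np

W, PHI1, PHI2 = 0.4, 2.0, 2.0  # inertia weight and attraction constants
rng = np.random.default_rng(1)

def update_velocity(v, x, local_best, global_best, rng):
    """One particle's velocity update, applied separately along each dimension."""
    r1 = rng.uniform(size=x.shape)  # a different random number per dimension
    r2 = rng.uniform(size=x.shape)
    inertia = W * v                               # 'habit': keep moving as before
    memory = PHI1 * r1 * (local_best - x)         # pull toward own best position
    cooperation = PHI2 * r2 * (global_best - x)   # pull toward the swarm's best
    return inertia + memory + cooperation

# Illustrative stand-in vectors for one 200-dimensional particle.
v = np.zeros(200)
x = np.full(200, 0.5)
v_new = update_velocity(v, x, np.full(200, 0.6), np.full(200, 0.7), rng)
```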

The algorithm limits the resulting velocity to a maximum RMS value by the rule given
in Figure 1, where here vRMS,max = 0.3. Although most researchers apply a clipping
velocity limit Vmax along each dimension separately, the RMS limit used here seems to
work quite well, perhaps because it preserves the direction of the updated velocity. Like
the inertia weight w, large values of Vmax or vRMS,max encourage global searching
while small values encourage local searching. Some researchers find an advantage in
decreasing the velocity limit during the optimization process, to encourage local
searching at the end of the optimization. Next, the new position of the particle is
calculated by adding the new velocity vector to the old particle position vector, where a
unit time step is assumed. If any dimension of the new position vector is less than
zero or more than one, it is clipped to stay within this range. This new position is mapped
to amplitude and phase weights and the new resulting far field pattern is scored. If this
position has the best score that this particle has found so far, then it is retained as the
local best memory for this particle. If, in addition, this position has the best score of any
particle so far, then it is further retained as the global best for the entire swarm. The final
array distribution is taken as the global best scoring particle after a specified number of
iterations is reached. The current implementation of the particle swarm optimizer uses
asynchronous updates, where the global best is updated after each particle, rather than
waiting until all the particles have been scored. This makes the most current global best
information known to all particles as soon as it becomes available. This seems to enhance
performance over synchronous updates, although asynchronous updating is not as
conducive to parallel processing as synchronous updates.
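The RMS velocity limit and position update described above can be sketched as below (vRMS,max = 0.3, unit time step, positions clipped to [0, 1]). The exact limiting rule appears only in Figure 1, so the rescaling used here is an assumption on our part: when the RMS value exceeds the maximum, the whole vector is scaled down, which matches the direction-preserving property noted in the text.

```python
import numpy as np

V_RMS_MAX = 0.3

def limit_velocity_rms(v, v_rms_max=V_RMS_MAX):
    """Rescale the velocity vector if its RMS value exceeds the limit.

    Unlike per-dimension clipping, rescaling the whole vector preserves
    the direction of the updated velocity (assumed rule, per Figure 1).
    """
    rms = np.sqrt(np.mean(v ** 2))
    if rms > v_rms_max:
        v = v * (v_rms_max / rms)
    return v

def update_position(x, v):
    """Unit time step; clip each dimension to stay within [0, 1]."""
    return np.clip(x + v, 0.0, 1.0)

v = np.full(200, 1.0)        # RMS = 1.0, above the limit
v = limit_velocity_rms(v)    # rescaled so RMS = 0.3
x = update_position(np.full(200, 0.9), v)
```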

Genetic Algorithm. For comparison, an initial population of 50 random candidate array
distributions is generated and evolved for 10,000 generations (500,000 cost function
evaluations) using a genetic algorithm [4]. More specifically, 25 pairs of parents are
chosen by tournament at each iteration, where each parent is selected as the best of 5
randomly chosen from the best 10 candidates. Each of these 25 couples produces two
children using two crossovers. These 50 children have mutations applied, where the
allowed distance away from the original value decreases as the optimization proceeds,
and the children are then scored for fitness. The best-scoring individuals survive to the
next generation, and the process repeats. The final array distribution is the most fit
individual after the specified number of iterations.

Figure 2. Performance comparison between particle swarm optimizer (left) and genetic
algorithm (right), 5 trials each. Top row: amplitude-only synthesis. Middle row:
phase-only synthesis. Bottom row: complex synthesis.
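The selection and reproduction scheme described above (each parent the best of 5 drawn at random from the 10 fittest candidates, two children per couple) can be sketched as follows. The paper says only that "two crossovers" are used, so the two-point crossover below is an illustrative interpretation, and the stand-in cost is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def tournament_parent(population, scores, rng, pool=10, k=5):
    """Pick the best of k entrants drawn from the pool of fittest candidates.

    Lower cost means fitter, matching the paper's cost-minimization setup.
    """
    elite = np.argsort(scores)[:pool]              # indices of the 10 best
    entrants = rng.choice(elite, size=k, replace=False)
    return population[entrants[np.argmin(scores[entrants])]]

def two_point_crossover(a, b, rng):
    """Produce two children by swapping the segment between two cut points."""
    i, j = sorted(rng.choice(len(a), size=2, replace=False))
    c1, c2 = a.copy(), b.copy()
    c1[i:j], c2[i:j] = b[i:j], a[i:j]
    return c1, c2

population = rng.uniform(0.0, 1.0, size=(50, 200))
scores = population.sum(axis=1)                    # stand-in cost, lower is better
p1 = tournament_parent(population, scores, rng)
p2 = tournament_parent(population, scores, rng)
child1, child2 = two_point_crossover(p1, p2, rng)
```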

Phased Array Notch Synthesis. To compare the two algorithms, complex phased array
weights are synthesized to meet a far field sidelobe requirement including a 60 dB notch
on one side. This might be desired if the approximate direction to an interference source
were known. The antenna is a linear phased array of one hundred half-wavelength spaced
radiators. For convenience and fair comparison, both the particle swarm optimizer and
the genetic algorithm are designed to provide vectors of 200 real values between zero and
one. The first 100 values are scaled to the desired amplitude range and the second 100 are
scaled to the desired phase range. The cost measure to be minimized is the sum of the
squares of the excess far field magnitude above the specified sidelobe envelope. This
penalizes sidelobes above the envelope, while neither penalty nor reward is given for
sidelobes below the specification. The main beam is excluded from the cost function. The
lower the cost, the more fit the array distribution. The optimization is considered fully
converged if the cost function reaches -40 dB.
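The cost measure described above — the sum of squared excess far field magnitude above the sidelobe envelope, with no credit for sidelobes below it and the main beam excluded — can be sketched as follows. The envelope shape, angle sampling, and example pattern here are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

def notch_cost_db(pattern_db, envelope_db, main_beam_mask):
    """Sum of squared excess of the pattern above the sidelobe envelope.

    Sidelobes below the envelope earn neither penalty nor reward, and
    samples inside the main beam are excluded from the cost.
    """
    excess = np.maximum(pattern_db - envelope_db, 0.0)
    excess[main_beam_mask] = 0.0        # main beam does not contribute
    return float(np.sum(excess ** 2))

# Illustrative example: a flat -30 dB envelope with a -60 dB notch region.
angles = np.linspace(-1.0, 1.0, 201)    # sin(theta) samples
envelope = np.full_like(angles, -30.0)
envelope[(angles > 0.3) & (angles < 0.5)] = -60.0   # one-sided notch
pattern = np.full_like(angles, -35.0)   # below the flat envelope, above the notch floor
main_beam = np.abs(angles) < 0.02
cost = notch_cost_db(pattern, envelope, main_beam)
```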
Results. As seen in Figure 2, the particle swarm algorithm is competitive with the
genetic algorithm. For amplitude-only synthesis, the particle swarm algorithm performs
better early on, but can be outpaced by the genetic algorithm at higher iterations. For
phase-only synthesis, the algorithms have comparable performance, while the genetic
algorithm slightly outperforms particle swarm optimization for complex synthesis.
Figure 3 shows typical complex synthesis results for the particle swarm optimizer and
for the genetic algorithm, with the final amplitude and phase weights inset. Although the
sidelobe envelope constraint allows variations in the sidelobe arrangement, the quality of
the final results is similar regardless of the optimizer used, despite the relative simplicity
of the particle swarm optimizer.

Figure 3. Sidelobe notch synthesis results using particle swarm optimizer (top) and
genetic algorithm (bottom).

[1] J. Kennedy and R. C. Eberhart, Swarm Intelligence. San Francisco, CA: Morgan
Kaufmann, 2001.
[2] G. Ciuprina, D. Ioan, and I. Munteanu, "Use of intelligent-particle swarm
optimization in electromagnetics," IEEE Trans. Magn., vol. 38, no. 2, pp. 1037-1040,
Mar. 2002.
[3] J. Robinson, S. Sinton, and Y. Rahmat-Samii, "Particle swarm, genetic algorithm, and
their hybrids: optimization of a profiled corrugated horn antenna," in 2002 IEEE
Antennas Propagat. Soc. Int. Symp. Dig., vol. 1, pp. 314-317.
[4] D. W. Boeringer, D. W. Machuga, and D. H. Werner, "Synthesis of phased array
amplitude weights for stationary sidelobe envelopes using genetic algorithms," in
2002 IEEE Antennas Propagat. Soc. Int. Symp. Dig., vol. 4, pp. 684-687.
