INFORMATION TO USERS
This manuscript has been reproduced from the microfilm master. UMI
films the text directly from the original or copy submitted. Thus, some
thesis and dissertation copies are in typewriter face, while others may be
from any type of computer printer.
The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality
illustrations and photographs, print bleedthrough, substandard margins,
and improper alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete
manuscript and there are missing pages, these will be noted. Also, if
unauthorized copyright material had to be removed, a note will indicate
the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and
continuing from left to right in equal sections with small overlaps. Each
original is also photographed in one exposure and is included in reduced
form at the back of the book.
Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6” x 9” black and white
photographic prints are available for any photographs or illustrations
appearing in this copy for an additional charge. Contact UMI directly to
order.
UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700  800/521-0600

UNIVERSITY OF CALIFORNIA
Los Angeles
GENETIC ALGORITHMS IN ENGINEERING ELECTROMAGNETICS
A dissertation submitted in partial satisfaction of the requirements for
the degree Doctor of Philosophy in
Electrical Engineering
by
J. Michael Johnson
1997

UMI Number: 9807671
UMI Microform 9807671
Copyright 1997, by UMI Company. All rights reserved.
This microform edition is protected against unauthorized
copying under Title 17, United States Code.
UMI
300 North Zeeb Road
Ann Arbor, MI 48103

The dissertation of J. Michael Johnson is approved.
Nathaniel Grossman
Tatsuo Itoh
Yahya Rahmat-Samii, Committee Chair
Kung Yao
University of California, Los Angeles
1997

DEDICATION
I dedicate this work to the two most important people in my life. To my mother,
MaryLou S. Johnson, without whom I would, of course, have no life and to my wife,
Elizabeth E. Leitereg, without whom my life would have no meaning.

TABLE OF CONTENTS
DEDICATION
TABLE OF CONTENTS
LIST OF FIGURES
ACKNOWLEDGMENTS
LIST OF SYMBOLS AND ACRONYMS
VITA
PUBLICATIONS AND PRESENTATIONS
ABSTRACT OF THE DISSERTATION

CHAPTER 1
INTRODUCTION

CHAPTER 2
GENETIC ALGORITHM OVERVIEW
2.1 GLOBAL VS. LOCAL OPTIMIZATION
2.2 CONCEPTUAL DEVELOPMENT OF THE GENETIC ALGORITHM

CHAPTER 3
A SIMPLE GENETIC ALGORITHM
3.1 CHROMOSOMES AND PARAMETER CODING
3.2 SELECTION STRATEGIES
3.2.1 POPULATION DECIMATION
3.2.2 PROPORTIONATE SELECTION
3.2.3 TOURNAMENT SELECTION
3.3 GA OPERATORS
3.4 FITNESS FUNCTIONS
3.5 EXTENSIONS AND IMPROVEMENTS TO THE SIMPLE GA OPTIMIZER
3.5.1 ELITIST STRATEGY
3.5.2 FITNESS SCALING
3.5.3 STEADY STATE GENETIC ALGORITHMS
3.5.4 THE TRAVELING SALESMAN PROBLEM (TSP)

CHAPTER 4
STEP BY STEP IMPLEMENTATION: A CASE STUDY
4.1 A MORE DIFFICULT PROBLEM
4.2 A COMPARISON OF STEADY STATE AND GENERATIONAL GA PERFORMANCE

CHAPTER 5
ARRAY DESIGN USING GENETIC ALGORITHMS
5.1 LOW SIDELOBE LEVELS IN THINNED AND NON-UNIFORM ARRAYS
5.2 SHAPED BEAM ARRAYS

CHAPTER 6
MULTI-OBJECTIVE ARRAY DESIGN WITH GA
6.1 PHASE CONTROLLED SWITCHED BEAM ARRAY DESIGN
6.1.1 PATTERN-B DESIGN FOLLOWED BY GA OPTIMIZATION TO PRODUCE PATTERN-A
6.1.2 PATTERN-A DESIGN FOLLOWED BY GA OPTIMIZATION TO PRODUCE PATTERN-B
6.1.3 SIMULTANEOUS PATTERN A AND PATTERN B GA OPTIMIZATION

CHAPTER 7
ARRAY BASED REFLECTOR DISTORTION COMPENSATION USING GA OPTIMIZATION
7.1 DIFFRACTION ANALYSIS AND THE MULTI-BEAM ANTENNA APPROACH
7.2 MULTI-BEAM APPROACH TO ARRAY OPTIMIZATION
7.3 COMPENSATION OF DISTORTION USING GA OPTIMIZATION
7.4 CONCLUSION

CHAPTER 8
WIRELESS NETWORK LAYOUT USING GENETIC ALGORITHMS
8.1 GA NETWORK OPTIMIZATION FITNESS FUNCTION
8.2 RESULTS FOR SIMPLE NETWORKS
8.3 A MORE REALISTIC CASE

CHAPTER 9
GA/MOM: PATCH ANTENNA DESIGNS USING GA AND METHOD OF MOMENTS
9.1 GA/MOM METHODOLOGY
9.2 WIDEBAND PATCH ANTENNA DESIGN EXAMPLE
9.3 DUAL BAND PATCH ANTENNA DESIGN
9.4 FDTD SIMULATION OF OPTIMIZED PATCH ANTENNA
9.5 MEASURED RESULTS OF OPTIMIZED PATCH ANTENNA
9.6 INVESTIGATION OF DUAL BAND PATCH OPERATION

CHAPTER 10
CONCLUSIONS AND SUGGESTIONS FOR FUTURE WORK
10.1 CONCLUSIONS
10.2 GUIDELINES FOR THE APPLICATION OF GA TO ELECTROMAGNETIC PROBLEMS
10.3 SUGGESTIONS FOR FUTURE WORK

APPENDIX A
DOLPH-CHEBYSHEV ARRAY SYNTHESIS

APPENDIX B
SURFACE PATCH METHOD OF MOMENTS AND THE RAO, WILTON, GLISSON BASIS FUNCTION
B.1 THE ELECTRIC FIELD INTEGRAL EQUATION
B.2 THE METHOD OF MOMENTS
B.3 THE RAO, WILTON BASIS FUNCTION

APPENDIX C
A BRIEF INTRODUCTION TO FINITE DIFFERENCE TIME DOMAIN (FDTD) ANALYSIS

BIBLIOGRAPHY

LIST OF FIGURES
Figure 1-1: Real life optimization of a broadband patch antenna design might
include finite numbers of available dielectric substrates, material costs
and material weights as well as the usual parameters such as size of the
patch and location of the probe. GA optimizers are robust enough to
handle such diverse parameter sets.
Figure 1-2: Linear array GA optimization problem.
Figure 1-3: Antenna beam specifications for the patterns required in the
example of chapter 6.
Figure 1-4: GA is used to compensate for distorted, array fed reflector
antennas by adjusting the array excitation coefficients in chapter 7.
(Figure courtesy of R. Hoferer).
Figure 1-5: The problem explored in chapter 8 involves the automatic GA
based layout of wireless, point to point networks.
Figure 1-6: The problem explored in chapter 9 involves GA optimization of
the shape of a patch antenna.
Figure 2-1: Important concepts and terminology associated with genetic
algorithms.
Figure 2-2: The major optimization methods can be classified as either global
or local techniques.
Figure 2-3: Genetic Algorithm (GA) optimization compared qualitatively to
conjugate gradient (CG) and random search (Random).
Figure 2-4: Holland's schemata concept.
Figure 2-5: Some schema of a 7-bit binary chromosome.
Figure 3-1: Block diagram of a simple genetic algorithm optimizer.
Figure 3-2: Chromosomes can be entirely encoded (usually binary), floating
point or mixed binary and floating point. Generally a parameter is
equivalent to a gene.
Figure 3-3: Proportionate selection represented as a roulette wheel with
spaces on the wheel proportional to an individual's relative fitness.
Figure 3-4: Tournament selection, where N individuals are selected at
random from the population and the individual with the highest fitness
in the selected sub-population becomes the selected individual.
Figure 3-5: The single point crossover operator redistributes the
characteristics of a pair of parents and creates a pair of children.
Figure 3-6: The mutation operator randomly modifies elements within the
chromosome.
Figure 3-7: The partially matched crossover operator redistributes the
characteristics of a pair of parents and creates a pair of children while
preserving a valid tour.
Figure 3-8: The TSP mutation operator randomly selects and interchanges a
pair of elements within the chromosome.
Figure 4-1: A plot of the solution surface for the 2D magnitude SINC function
example problem of equation (4.1), which has a global maximum at (x =
3.0, y = 3.0).
Figure 4-2: The genetic algorithm optimization converges to the optimal
solution (fitness = 1.0) much faster, on average, and with smaller
variance over a number of successive, independent trials than the
random search method.
Figure 4-3: Distribution of a GA population at the first generation and after
100 generations showing the convergence of the population toward the
peak and the general alignment of the population along planes that
intersect the peak.
Figure 4-4: Distribution of a GA population on a bi-modal surface at the
initialization point and after 100 generations once again showing
alignment of the population on planes intersecting the peak.
Figure 4-5: Comparison between average progress towards convergence for
generational and steady state replacement schemes showing better
convergence for the steady state schemes.
Figure 4-6: Comparison between average progress towards convergence for
roulette wheel versus tournament selection in both generational and
steady state replacement schemes.
Figure 5-1: Minimum sidelobe through array decimation using GA
optimization.
Figure 5-2: Results of thinning an array to achieve an optimal low sidelobe
pattern using GA.
Figure 5-3: Results of thinning an array to achieve an optimal low sidelobe
pattern using GA.
Figure 5-4: Geometry for the flat top beam design case.
Figure 5-5: Excitation coefficient magnitudes for continuously variable GA
optimized flat-top beam design.
Figure 5-6: Excitation coefficient phases for continuously variable GA
optimized flat-top beam design.
Figure 5-7: Pattern for the continuously variable case optimization results.
Figure 5-8: Pattern for case-2 GA optimization results where only 4
amplitude states and 2 phase states were used during the optimization
process.
Figure 5-9: Close-up plot of the main beam region of case-2 showing
difference in the ripple when compared with a pattern produced by
rounded off excitation coefficients of case-1.
Figure 6-1: Pattern specifications for the widebeam pattern, pattern-A.
Figure 6-2: Pattern specifications for the narrowbeam pattern, pattern-B.
Figure 6-3: Amplitude of excitation coefficients of an 18 element array as
determined by the Dolph-Chebyshev synthesis procedure.
Figure 6-4: Array pattern for an 18 element array using the excitation
coefficients determined by the Dolph-Chebyshev synthesis procedure.
Figure 6-5: Four array designs produced by GA optimization using the fitness
function of equation (6.9) where both excitation coefficient amplitude
and phase were optimized.
Figure 6-6: Some example results of phase only GA optimization of the
Dolph-Chebyshev design using the fitness function of equation (6.9).
Figure 6-7: Pattern produced by application of Orchard synthesis to an 18
element array with pattern-A specifications.
Figure 6-8: Four example patterns produced by GA optimization of the 18-
element array using the pattern-B fitness function.
Figure 6-9: Pattern produced by GA phase only optimization of excitation
coefficients produced by the Orchard synthesis.
Figure 6-10: Widebeam pattern for best results from combined GA
optimization.
Figure 6-11: Narrowbeam pattern for best results from combined GA
optimization.
Figure 6-12: Amplitude distribution of the best result obtained from
combined GA optimization.
Figure 6-13: Phase distribution of the best result obtained from combined GA
optimization.
Figure 7-1: Reflector geometry and array feed layout.
Figure 7-2: Plot of the reflector surface distortion term.
Figure 7-3: Performance of an undistorted 20 λ center fed parabolic
reflector with various feed configurations.
Figure 7-4: Pattern of the undistorted reflector with pre-distortion optimized
excitation coefficients compared to the 10 dB taper single feed pattern.
Figure 7-5: Performance of an undistorted 20 λ center fed parabolic
reflector with an optimized array feed, performance with distortion
added and the recovered performance with GA optimization.
Figure 8-1: Typical network structures used in backbone data distribution
networks where a) depicts an open path backbone and b) shows a ring
backbone. Ring backbones and open path backbones differ only in that
there is no connection between the first and last elements in an open
backbone.
Figure 8-2: Example of two encoding schemes for the path length
minimization problem.
Figure 8-3: Example 5 bit chromosome encoding scheme for the
maximization of SNR at the nodes.
Figure 8-4: A set of 10 nodes a) arranged in a rectangle were interconnected
using b) the TSP measure of optimum and c) the SNR measure of
optimum.
Figure 8-5: a) Given a set of 14 nodes at known locations, similar results to
those obtained with 10 nodes for the SNR measure are obtained. b)
Including sidelobes in the antenna pattern modifies the pattern
somewhat.
Figure 8-6: GA optimized minimum distance network layout for 20 nodes.
Figure 8-7: GA optimized maximum SNR network layout for 20 nodes.
Figure 9-1: Block diagram of GA/MoM Direct Matrix Manipulation
approach.
Figure 9-2: Comparison of the MoM block diagram and the block diagram of
the GA/MoM Direct Matrix Manipulation approach.
Figure 9-3: Matrix fill time and matrix inversion time comparison.
Figure 9-4: Calculated S11 for patch antenna before and after GA/MoM
optimization.
Figure 9-5: Patch antenna with simple wire feed above an infinite ground
plane before (a) and after (b) GA optimization.
Figure 9-6: Double density MoM discretized version of optimized patch
antenna to test for convergence.
Figure 9-7: Double density MoM discretized version of optimized patch
antenna to test for convergence.
Figure 9-8: Calculated S11 for the dual band patch antenna before and after
GA/MoM optimization.
Figure 9-9: Dual band optimized patch antenna structure shown over an
infinite ground plane.
Figure 9-10: Principal plane radiation patterns of the original patch antenna
and the dual band optimized patch. a) φ = 0 degrees plane cut; b) φ = 90
degrees plane.
Figure 9-11: Dual band optimized patch antenna structure shown with FDTD
discretization grid.
Figure 9-12: Comparison of MoM and FDTD predictions of |S11| for the
optimized dual band patch antenna.
Figure 9-13: Comparison of FDTD predictions of |S11| for the optimized
dual band patch antenna with two different dielectrics.
Figure 9-14: Photograph of the prototype dual band GA optimized antenna.
Figure 9-15: Measured and MoM modeled |S11| performance for the GA
optimized dual band antenna.
Figure 9-16: Several substructures of the dual band patch antenna are
readily identifiable including (a) the main body without holes and (b)
the main body with holes, (c) the main body with the central tab, (d) the
main body with the two small tabs, (e) the main body plus the large and
small tabs and (f) the complete dual band antenna with tabs and holes.
Figure 9-17: Magnitude S11 results for the main body with and without holes
compared to that of the complete dual band patch antenna.
Figure 9-18: Magnitude S11 results for the main body with the central tab
and the main body with the smaller tabs both without holes as shown in
Figure 9-16 (c) and (d).
Figure 9-19: Magnitude S11 results for the main body with both the central
tab and the smaller tabs without holes as shown in Figure 9-16(e) and the
results of the complete dual band patch structure shown in Figure
9-16(f).
Figure 9-20: The GA/MoM results can be improved by slightly lengthening
the tabs and leaving out the holes.
Figure B-1: A volume V surrounded by surfaces S1...SN.
Figure B-2: Key geometrical parameters for the RWG basis function.
Figure C-1: The H-fields are calculated from the surrounding E-fields.
Figure C-2: The Yee cell approach to spatial discretization in FDTD.

ACKNOWLEDGMENTS
The author wishes to acknowledge the support, patience, love and understanding of my
wife, Elizabeth Leitereg. She put her life on hold for 6 years so that I could pursue
this research and the Ph.D. degree that it represents, and I am most grateful.
The author would also like to acknowledge Yahya Rahmat-Samii for his counsel and
invaluable advice during the course of this research. Many times the author thought a
dead-end had been reached only to have a comment or suggestion from Professor
Rahmat-Samii lead to a significant step forward. The author would also like to
express his appreciation to Nathaniel Grossman, Tatsuo Itoh, and Kung Yao for
agreeing to serve on his thesis committee.
The author would like to express sincere appreciation to Richard Hodges for the use of
his JPOHM method of moments code, Dah-Weih Daun for the use of his DUAL
PO/PTD code, and Michael Jensen for the use of his FDTD code. These codes were
developed as part of the developers' Ph.D. research at UCLA. It was only the
availability of these codes that made possible the extensive range of application of GA
that is presented herein.
Finally the author would like to acknowledge his lab colleagues, especially Joseph
Colburn and Robert Hoferer, who have over the course of this study provided much
encouragement, helpful criticism and even a figure or two.

LIST OF SYMBOLS AND ACRONYMS
The definition of a given symbol or acronym is intended to apply to the entire
document except when followed by a reference to a particular chapter or appendix.
RCS           Radar Cross Section
GA            Genetic Algorithm
PO/PTD        Physical Optics/Physical Theory of Diffraction
P_cross       probability of crossover
P_mutation    probability of mutation
CG            conjugate gradient optimization
k             alphabet cardinality (chapter 2)
L             length of chromosome
m(H,t)        number of a particular schemata H contained in the
              population at time t
H             a given schemata
f()           fitness function
F             population average fitness
o(H)          schema order (chapter 2)
f(parent_i)   fitness of ith parent
P_selection   probability of selection
O(n)          complexity order n
P             probability
Max           number larger than the largest expected fitness value
f_scaled      fitness value after scaling
a, b, c       user selected integers used in scaling (chapter 3)
f'            intermediate fitness
f             raw fitness
f_dev         deviation from average fitness
TSP           traveling salesman problem
PMX           partially matched crossover
x             parameter value, a real number (chapter 4)
x_max         maximum value in parameter range (chapter 4)
x_min         minimum value in parameter range (chapter 4)
N             number of bits in chromosome (chapter 4)
b_n           binary bit in the nth position in the chromosome
              (chapter 4)
λ             wavelength
D_avg         average directivity (chapter 5)
D_i           directivity at ith angle
rms           root mean squared
AF            array factor (chapter 5)
W             fitness weight value, integer
k             wave number (2π/λ)
θ_0           array scan angle
AF            array factor
a_n           nth excitation coefficient in an array
2N+1, 2N      number of elements in array for odd and even
              numbers of elements respectively (chapter 6)
A_n           amplitude of array excitation coefficient (chapter 6)
φ_n           phase of array excitation coefficient (chapter 6)
dB            decibels
SLL           sidelobe level factor (equation 6.5)
W             fitness weight value
SLL_max       maximum sidelobe level in dB (chapter 6)
R_max         maximum ripple in dB (chapter 6)
C             arbitrary number chosen by user
F_combined    combined fitness
F             focal length of undistorted reflector
F_d           distortion term (equation 7.2)
L()           linear operator
E()           E-field radiated by reflector array combination
              (chapter 7)
SNR           signal to noise ratio
MST           minimum spanning tree
WGN           white gaussian noise
P_ij          power received by the ith node from the jth node
P_r           power received
R             radial distance
G_t           gain of transmitting antenna
G_r           gain of receiving antenna
P_t           power transmitted
N_m           noise at mth node
SNR_m         signal to noise ratio at mth node
MoM           method of moments
NEC           numerical electromagnetic code
DMM           direct matrix manipulation
FDTD          finite difference time domain
EFIE          electric field integral equation
Z             impedance matrix of MoM
J             current vector (a.k.a. solution) of MoM
Y             inverse of Z
V             source term of MoM
Z', Y'        modified Z or Y matrices
Z_mm          block sub-matrix of Z that is modified by DMM
Z_mn, Z_nm    block sub-matrices of Z that are modified by DMM
Z_nn          block sub-matrix of Z that is not modified by DMM
VSWR          voltage standing wave ratio
ε_r           relative permittivity (a.k.a. dielectric constant)
T_m(x)        mth order Chebyshev polynomial
E             vector electric field
H             vector magnetic field
M             magnetic current
J             vector electric current
μ             permeability
ε             permittivity
ω             radian frequency
∇×            curl operator
∇·            divergence operator
q             electric charge
q_m           magnetic charge (appendix B)
V             volume
S             surface
G(r,r')       scalar free space Green's function
n̂             surface normal for S
R             radial distance
∇             gradient operator
f_n           basis function for the MoM (appendix B)
T_n^+         triangle attached to the plus side of the nth edge
A_n           area of nth triangle (appendix B)
σ_m           magnetic conductivity
σ             electric conductivity
Δx, Δy, Δz    FDTD grid spacing in the x, y and z direction
              respectively (appendix C)
∂/∂x          partial derivative operator w.r.t. x

VITA

September 3, 1957   Born, El Paso, Texas
June 1980           B.S. Biological Sciences,
                    University of California, Irvine
1980-1984           Member of Technical Staff,
                    Advanced Hybrid Technology Department,
                    Solid State Products Division,
                    Hughes Aircraft Company
June 1983           M.S.E. Electrical Engineering,
                    University of California, Irvine;
                    Major Field: Control Systems
1984                Best Paper Award,
                    International Society for Hybrid
                    Microelectronics Symposium, Dallas, TX
1984-1986           Research Engineer, Sr.,
                    Communication Systems Hardware Department,
                    Communications Systems Engineering,
                    Space Systems Division,
                    Lockheed Missiles and Space Company
1986-1988           Member of Technical Staff,
                    Systems Engineering,
                    Condor Systems, Inc.
1988-1989           Member of Technical Staff,
                    Deskin Research Group
1989-1994           Sr. Staff Systems Engineer,
                    Systems Engineering,
                    Condor Systems, Inc.
1994-Present        President and Founder,
                    North Shores Associates

PUBLICATIONS AND PRESENTATIONS
Johnson, J. M., et al., "Solar Powered Hybrid Sensors and their Application,"
Proceedings of the International Society for Hybrid Microelectronics,
Dallas, TX, 1984.
Johnson, J. M. and Y. Rahmat-Samii "Genetic Algorithm Optimization and its
Application to Antenna Design," IEEE Antennas and Propagation Society
International Symposium Digest, Seattle, WA, 1994, vol. 1, pp. 326-329.
Johnson, J. M. and Y. Rahmat-Samii “Genetic Algorithm Optimization of Wireless
Communication Networks," IEEE Antennas and Propagation Society International
Symposium Digest, Newport Beach, CA, June 18-23, 1995, vol. 4, pp. 1964-1967.
Johnson, J. M. and Y. Rahmat-Samii “Genetic Algorithm Optimization for Aerospace
Electromagnetic Design and Analysis” IEEE Aerospace Applications Conference
Proceedings, Snowmass at Aspen, CO, 1996 vol. 1, pp. 87-102.
Johnson, J. M. and Y. Rahmat-Samii "Multiple Region FDTD (MR/FDTD) and its
Application to Microwave Analysis and Modeling” IEEE MTT-S 1996 Symposium
Digest, San Francisco, CA, June 17-21, 1996 vol. 3, pp. 1475-1478.
Johnson, J. M. and Y. Rahmat-Samii "Wideband Tab Monopole Antenna Array for
Wireless Adaptive and Mobile Information Systems Application” IEEE Antennas and
Propagation Society International Symposium Digest, Baltimore, MD, July 21-26,
1996, vol. 1, pp. 718-721.
Johnson, J. M. and Y. Rahmat-Samii, "Genetic Algorithms in Electromagnetics” IEEE
Antennas and Propagation Society International Symposium Digest, Baltimore, MD,
July 21-26, 1996, vol. 2, pp. 1480-1483.
Johnson, J. M. and Y. Rahmat-Samii, "The Tab Monopole," IEEE Trans. Antennas
and Propagat., vol. 45, no. 1, January 1997, pp. 187-188.
Johnson, J. M. and Y. Rahmat-Samii, “MR/FDTD: Multiple Region Finite Difference
Time Domain Method,” Microwave and Optical Technology Letters, vol. 14, no. 2,
February 1997, pp. 101-105.
Johnson, J. M. and Y. Rahmat-Samii, “A Novel Integration of Genetic Algorithms
and Method of Moments (GA/MoM) for Antenna Design,” Applied Computational
Electromagnetic Society Symposium, March 17-21, 1997.

Johnson, J. M. and Y. Rahmat-Samii, "Genetic Algorithms and Method of Moments
(GA/MoM): A Novel Integration for Antenna Design,” IEEE Antennas and
Propagation Society International Symposium Digest, Montreal, Canada, July 14-18,
1997.
Johnson, J. M. and Y. Rahmat-Samii "Multiple Region Finite Difference Time
Domain (MR/FDTD)" USNC/URSI Radio Science Meeting, Baltimore, MD, July 21-
26, 1996, URSI Digest, pg. 120.
Elsherbeni, A., J. M. Johnson and Y. Rahmat-Samii, "Impedance Characterization
using Finite Difference Time Domain Analysis," 1997 Progress in Electromagnetics
Research Symposium, 6-9 January 1997.

ABSTRACT OF THE DISSERTATION
GENETIC ALGORITHMS IN
ENGINEERING ELECTROMAGNETICS
By
J. Michael Johnson
Doctor of Philosophy in Electrical Engineering
University of California, Los Angeles, 1997
Professor Yahya Rahmat-Samii, Chair
The application of modern electromagnetic theory in real world radiation and
scattering problems, especially antenna problems, often requires or at least benefits
from the use of optimization. Common features of many real world electromagnetic
optimization problems include the involvement of large numbers of parameters either
continuous or discretized, constraints in the parameters, and a desire to locate global
maxima or minima. The origins of the work presented in this dissertation can be
found in the question “Is there a better way to solve modern, real-world
electromagnetic design problems?” This dissertation focuses on a relatively new
approach to optimization called the genetic algorithm (GA) in an attempt to answer
yes to this question. Genetic algorithms are robust, stochastic based search methods
that can handle the common characteristics of electromagnetic optimization problems
that are not readily handled by other traditional optimization methods. The goal of this dissertation is to explore genetic algorithms and their application to a variety of
electromagnetic, particularly antenna, and related problems. An overview of GAs is
presented and the relationship between traditional optimization techniques and GA is
discussed. Step by step implementation aspects of GA are detailed by way of a
numerical example. The overview and step by step implementation discussion is
followed by a presentation of several electromagnetic problems to which GA has been
applied and has proved useful. The applications include the use of GA optimization
for thinned and shaped beam linear arrays, multi-objective array optimization,
reflector distortion compensation by feed array optimization, wireless network layout
optimization, and patch antenna design optimization. Throughout the work
summarized in this dissertation, a consistent theme was the coupling of GA
optimization to traditional, high accuracy electromagnetic modeling and simulation
methodologies. This coupling is most completely realized in the coupling of GA and
the method of moments for patch antenna design optimization. Where appropriate,
physical prototypes of GA produced designs were manufactured and tested to validate
the design results. In general, genetic algorithm optimization is shown to be
robust and suitable for optimizing a broad class of problems of interest to the
electromagnetic community.

Chapter 1
INTRODUCTION
The application of modern electromagnetic theory to radiation and scattering problems
often either requires, or at least benefits from, the use of optimization. Among the
typical problems requiring optimization are shaped reflector antenna design [1], target
image reconstruction [2], and layered material, anti-reflective coating design for low
radar cross section (RCS) [3]. Other problems, such as antenna array beam pattern
shaping [4,5], while sometimes solvable without optimization, are often more readily
handled using optimization particularly when one is faced with realization constraints
imposed by manufacturing considerations or environmental factors.
Electromagnetic optimization problems generally involve a large number of
parameters. The parameters can be either continuous, discrete or both and often
include constraints in allowable values. The goal of the optimization is to find a
solution that represents a global maximum or minimum. In addition, the solution
domains of electromagnetic optimization problems often have non-differentiable and/or
discontinuous regions, and solvers often utilize approximations or models of the true
electromagnetic phenomena to conserve computational resources. These
characteristics sorely test the capabilities of many of the traditional optimization
techniques and often require hybridization of traditional optimization methods if these
methods are to be applied at all.
This dissertation focuses on a relatively new approach to optimization called the
genetic algorithm (GA). Genetic algorithms are robust, stochastic based search
methods that can handle the common characteristics of electromagnetic optimization
problems that are not readily handled by other traditional optimization methods. The
goal of this dissertation is to explore genetic algorithms and their application to a
variety of electromagnetic and electromagnetic related problems. Chapter 2 gives an
overview of the genetic algorithm and discusses the relationship of GA optimization to
other traditional optimization methods. Chapter 3 describes a simple genetic
algorithm and provides some of the details of its implementation and use. Chapter 4
presents a couple of case studies of the use of GAs to find the maxima of several
two dimensional surfaces. Chapter 4 is intended to acquaint the reader with the GA
concepts as used in practice as well as to attempt to substantiate the claim that GA
optimization is applicable to electromagnetic problems. Finally, Chapters 5-9 discuss
the application of GA optimization to electromagnetic problems. The examples in
Chapters 5-9 are intended to illustrate the wide applicability of GA optimization, to
address some of the peculiarities of the application of GA in electromagnetic
problems, and to emphasize some decisions and choices faced by the user when
considering whether to use GA optimization.

The applications examined in Chapters 5-9 for the most part represent ground
breaking, novel applications of GA and range across a broad spectrum of
electromagnetically related design situations.
Figure 1-1: Real life optimization of a broadband patch antenna design might include finite
numbers of available dielectric substrates, material costs and material weights as well as
the usual parameters such as size of the patch and location of the probe. GA optimizers are
robust enough to handle such diverse parameter sets.
Before beginning the presentation of GAs it may be helpful to consider what kinds of
problems might benefit from GA optimization. A good prototypical problem for GA
optimization is illustrated in Figure 1-1 where the problem is to design a broad band
patch antenna. Parameters that are usually included in this type of optimization
problem include the location of the feed probe, the width and length of the patch(es)
and the height of the patch(es) above the ground plane(s). In addition, it may be
desirable to include constraints on the available dielectric materials, both in terms of
thickness and dielectric constants, tolerance limits on the patch size and probe
location, constraints on the weight of the final design, and possibly even cost
constraints for the final production model. Given the large number of parameters and
the unavoidable mixture of discrete and continuous parameters involved in this
problem it is virtually impossible to use traditional optimization methods. GA
optimizers, on the other hand, can readily handle such a disparate set of optimization
parameters. The rest of this dissertation attempts to show why GAs would work well
for optimization based design in electromagnetics.
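To make the mixed parameter set concrete, the sketch below shows one possible way a GA chromosome could encode both a discrete choice (the substrate) and a continuous dimension (the patch length) as a single bit string. The substrate list, length range, and bit counts here are hypothetical illustration values, not parameters from the design problems in this dissertation.

```python
import random

# Hypothetical design parameters for a patch problem of the kind sketched above.
SUBSTRATES = [2.2, 4.4, 6.15, 10.2]   # discrete: available dielectric constants
PATCH_LEN = (20.0, 60.0)              # continuous: patch length range in mm
BITS = 8                              # bits encoding the continuous gene

def decode(chromosome):
    """Map a bit string to (substrate dielectric constant, patch length in mm)."""
    # The first two bits index one of the four available substrates.
    sub_idx = chromosome[0] * 2 + chromosome[1]
    # The remaining bits place the patch length on a uniform grid over its range.
    raw = sum(b << i for i, b in enumerate(reversed(chromosome[2:])))
    lo, hi = PATCH_LEN
    length = lo + raw * (hi - lo) / (2 ** BITS - 1)
    return SUBSTRATES[sub_idx], length

# A random individual decodes to one valid (substrate, length) design.
chromosome = [random.randint(0, 1) for _ in range(2 + BITS)]
er, length = decode(chromosome)
```

Because every bit string decodes to a legal design, the crossover and mutation operators described in chapter 3 can manipulate discrete and continuous parameters uniformly, which is the property that makes such disparate parameter sets tractable.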
1.1 HISTORICAL PERSPECTIVE
Prior to beginning the discussion of the concepts of genetic algorithms and their
application to solving electromagnetic problems a brief introduction to the history of
genetic algorithms is warranted. The genetic algorithm is a stochastic search
procedure modeled on the Darwinian concepts of natural selection and evolution. In
the genetic algorithm, a set or population of potential solutions is caused to evolve
toward a global optimal solution. The GA utilizes simple recombination and mutation
of existing solution characteristics and evolution is the result of selective pressure
exerted by fitness based selection.
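The mechanism just described, a population evolving under recombination, mutation, and fitness based selection, can be sketched in a few lines. This is a generic illustration rather than the specific optimizer developed in later chapters; the population size, operator rates, and the toy "one-max" fitness function are arbitrary choices made for the example.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02):
    """Minimal generational GA: binary chromosomes, tournament selection,
    single point crossover, and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def tournament(current):
        # Pick two individuals at random; the fitter one is selected.
        a, b = random.sample(current, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            if random.random() < p_cross:
                # Single point crossover swaps the tails of the two parents.
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                # Bit-flip mutation with a small per-bit probability.
                children.append([1 - b if random.random() < p_mut else b
                                 for b in child])
        pop = children[:pop_size]
        # Track the best individual seen so far (reporting only, not elitism).
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# Toy fitness: count the ones in the chromosome ("one-max").
best = genetic_algorithm(lambda bits: sum(bits))
```

Chapter 3 develops each of these pieces, chromosome coding, selection strategies, crossover, and mutation, in detail, along with refinements such as elitism and steady state replacement that this bare sketch omits.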
The ideas of GA were developed out of attempts in the 1960's and early 1970's to
produce artificial systems that could adapt to changing situations. As a class of
algorithms, GA belongs to a larger group known as evolutionary methods that also
include such things as simulated annealing. An excellent overview of GA is presented
in [6].
While a number of notable examples of earlier attempts to develop GAs exist, the
concept of the genetic algorithm was first formalized by Holland [7] and his students.
GAs were later extended to functional optimization by De Jong [8] and others.
The use of the GA as an optimizer for solving electromagnetic problems first appeared in
the literature in the early 1990's. In 1993, Michielssen applied the GA to the design by
optimization of broadband microwave absorbers [3]. Since the appearance of that
paper, GA optimization has found, and is continuing to find, numerous applications to
real world problems in engineering and computer science.
To date, GA optimization has been applied successfully to a wide variety of
electromagnetic problems. As noted above, GA optimization has been used
successfully in the design of broadband microwave absorbers [3,9,10,11,12,13,14,15].
A large number of examples of the synthesis of antenna arrays using GA optimization
have been reported. GA optimization has been used successfully to produce thinned
or decimated arrays [4,16,17,18,19] and to produce shaped beams [5,20,21,22,23,24,
25,26,27,28,29,30,31,32]. Sidelobe level control [33,34,35] and adaptive nulling in
arrays using GA optimization have been explored as well [36,37,38,39].
GA optimization has been used for the design of wire antennas of various forms
including the design of electrically loaded wire antennas [40,41,42,43,44,45] and the direct design of wire antennas using GAs [46,47,48,49]. GA optimization has also
been used in the design of frequency selective surfaces [50,51,52,53], radar target
recognition, parameter extraction and backscattering problems [2,54,55,56,57,58,
59,60,61,62,63,64], and magnetics design [65,66,67,68,69,70,71,72,73,74,75,76,77,
78,79,80]. In addition, GA optimization has found applications in wireless network
layout [81,82], optical coating design [83], waveguide design [84], microwave filter
design [85], and the design of patch/printed antennas [86,87,88]. Even perfectly
matched layers in finite difference time domain (FDTD) have been optimized with
GA [89].
Six applications of GAs to electromagnetic problems are described in more detail
below and in the chapters that follow. The examples provide a representative picture
of the range of genetic algorithm optimization applications possible in
electromagnetics. The selection of the particular examples included in chapters 5-9 is
intended to demonstrate the breadth of the applicability of GA optimization to
electromagnetics.
1.2 GA OPTIMIZATION EXAMPLES
The application of GA optimization to the design of antenna arrays is explored in
chapters 5-6. In chapter 5 two principal applications are addressed: array decimation
and shaped beam design through optimization. Figure 1-2 depicts the array
configuration used for these problems.

Figure 1-2: Linear array GA optimization problem.
The array decimation problem uses the GA optimizer to selectively remove array
elements from a uniformly excited array with the goal of reducing sidelobes. The
shaped beam design problem employs the GA optimizer to select the amplitude and
phase of the array excitation coefficients with the goal of achieving a flat top beam
design. The ability of GA to work with highly discrete sets of parameter values is also
demonstrated.
A more difficult array synthesis problem is undertaken in chapter 6. Chapter 6
expands on the array design theme with a look at a multi-objective
array design problem. In the example of chapter 6 the GA optimizer is used to
simultaneously synthesize a pair of antenna patterns whose specifications are
depicted in Figure 1-3. Several different approaches are compared.

Figure 1-3: Antenna beam specifications for the patterns required in the example of
chapter 6.

Figure 1-4: GA is used to compensate for distorted, array fed reflector antennas by
adjusting the array excitation coefficients in chapter 7.
(Figure courtesy of R. Hoferer)
Chapter 7 explores the use of GA optimization for reflector distortion compensation
using 2D feed arrays. In this work, the GA optimizer is used to select the array
element amplitude and phase excitation coefficients such that the effect of the reflector
distortion is effectively canceled by the action of the array. This idea is illustrated in
Figure 1-4. In this work, the ability of GA to work with two-dimensional arrays is
demonstrated, as are methods for efficiently integrating GA optimization and physical
optics/physical theory of diffraction (PO/PTD) analysis methods.
In chapter 8 the use of GA optimization for the layout of wireless networks is
investigated. The problem addressed is how to optimally connect a set of nodes to
form a network. Figure 1-5 illustrates the problem. Chapter 8 emphasizes the ability of the GA to handle NP-complete problem types that arise in this class of problems.
Chapter 8 also utilizes modified GA operators designed to enhance performance in the
traveling salesman problem indicative of the network layout problem.
Finally, chapter 9 describes the linking of GA and method of moments for the purpose
of developing patch antenna designs. A novel direct matrix manipulation (DMM)
technique is introduced and employed in this work. The kind of problem that is
addressed, illustrated in Figure 1-6, involves the design of sub-wavelength patch
antennas with either broadband or dual band performance. The design is achieved by
the selective removal of metalization under the control of the GA. The method of
moments is used as a means for calculating the performance of the various patch
designs under consideration. An example of a broadband patch antenna design and an
example dual band design are presented. The dual band design was manufactured and
the measured results are compared to those of the design model.

Figure 1-5: The problem explored in chapter 8 involves the automatic GA-based layout of
wireless, point-to-point networks.

Figure 1-6: The problem explored in chapter 9 involves GA optimization of the shape of a
patch antenna.
Chapter 2
GENETIC ALGORITHM OVERVIEW
As noted above, genetic algorithm (GA) optimizers are robust, stochastic search
methods modeled on the principles and concepts of natural selection and evolution.
As an optimizer, the powerful heuristic of GAs is effective at solving complex,
combinatorial and related problems. GA optimizers are particularly effective when
the goal is to find an approximate global maximum in a high-dimension, multi-modal
function domain in a near-optimal manner. The ability of the GA to perform in
difficult optimization problems is further enhanced when the problem can be cast in a
combinatorial form.
GA optimization borrows from the natural world in a number of ways. Some
important terminology and concepts of GA optimizers are presented in Figure 2-1.
The following summarizes the important concepts, many of which are
expanded upon and formalized in later chapters.
Populations and Chromosomes: in GA-based optimizations a set of trial solutions is
assembled as a population. The parameter set representing each trial solution or
individual is coded to form a string or chromosome, and each individual is assigned a
fitness value by evaluation of the objective function. Chromosomes can be binary
strings, strings of real parameters, or combinations of the two. The objective function
is the only direct link between the GA optimizer and the physical problem.
Parents: following the initialization process in which a population is created, pairs of
individuals are selected (with replacement) from the population in a probabilistic
manner weighted by their relative fitness and designated as parents. In a typical
selection scheme, modeled as a weighted roulette wheel, each individual in the
population is assigned space on the roulette wheel proportional to the individual's
relative fitness. The wheel is spun each time a parent is required. Individuals with the
largest spaces on the wheel have the greatest chance of being selected and, therefore,
the greatest probability of passing on their characteristics to the next generation.

Figure 2-1: Important concepts and terminology associated with genetic algorithms.
Parent: member of the current generation. Child: member of the next generation.
Generation: one of the successively created populations (GA iterations).
Chromosome: coded form of a trial solution, a vector (string) of genes made of alleles.
Fitness: positive number assigned to an individual representing a measure of goodness.
Children: a pair of offspring, or children, is then generated from the selected pair of
parents by the application of simple stochastic operators. The principal operators are
crossover and mutation. Crossover occurs with a probability p_cross
(typ. 0.6-0.8) and involves the random selection of a crossover site(s) and the
combining of the two parents' genetic information. Specifically, in single-point
crossover, child 1 receives the chromosomal sub-string that precedes the cross-site in
parent 1 and the sub-string following the cross-site in parent 2. Child 2 gets the
remaining genetic information not given to child 1. The two children produced share
the characteristics of the parents as a result of this recombination operator. Other
recombination operators are sometimes used but crossover is the most important.
Recombination (e.g., crossover) and selection are the principal ways that evolution
occurs in a GA optimization.
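The single-point crossover operator described above can be sketched in a few lines of Python (an illustrative fragment, not code from this dissertation; the function and parameter names are hypothetical):

```python
import random

def single_point_crossover(parent1, parent2, p_cross=0.7):
    """With probability p_cross, pick a random cross site and exchange the
    sub-strings of the two parents; otherwise copy the parents unchanged."""
    if random.random() >= p_cross or len(parent1) < 2:
        return parent1[:], parent2[:]
    site = random.randint(1, len(parent1) - 1)   # random cross site
    child1 = parent1[:site] + parent2[site:]     # parent 1 prefix + parent 2 suffix
    child2 = parent2[:site] + parent1[site:]     # the remaining material
    return child1, child2

c1, c2 = single_point_crossover([1, 1, 1, 1], [0, 0, 0, 0], p_cross=1.0)
# Each child is a prefix of one parent joined to the suffix of the other.
```

Note that the two children together conserve the parents' genetic material; only its distribution between the two chromosomes changes.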
Mutation: mutation is a mechanism for introducing new, unexplored points into the
GA optimizer's search domain. Mutation introduces genetic material that is not
present in the current population. The term "present" refers to genetic material or
sequences that are in the population either by direct representation or in terms of
possible recombinations of existing material. Genetically, mutation randomly changes
the genetic makeup of the population. Mutation is much less important than crossover
and occurs with a probability p_mutation (typ. 0.05) which is much less than p_cross.
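Bitwise mutation can likewise be sketched as follows (illustrative Python; names are hypothetical):

```python
import random

def mutate(chromosome, p_mut=0.05):
    """Bitwise mutation: each bit flips independently with probability p_mut,
    injecting genetic material that may be absent from the population."""
    return [bit ^ 1 if random.random() < p_mut else bit for bit in chromosome]

mutated = mutate([0] * 10)  # on average 0.5 bits flip per 10-bit chromosome
```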
New Generation: reproduction, consisting of selection and recombination/mutation,
continues until a new generation is created to replace the original generation. Highly
fit individuals, or more precisely, highly fit characteristics, produce more copies of
themselves in subsequent generations, resulting in a general drift of the population as a
whole towards an optimal solution point. The process can be terminated in several
ways: a threshold on the best individual (i.e., the process stops when an individual has
an error less than some amount ε), the number of generations exceeding a pre-selected
value, or some other appropriate criterion.
2.1 GLOBAL vs. LOCAL OPTIMIZATION
Before dealing with the specific details of genetic algorithms and their implementation,
it is useful to consider the relationship between GA optimizers and the more
traditional and possibly more familiar optimization methods. Genetic algorithms are
classified as global optimizers while more familiar, traditional techniques such as
conjugate gradient and the quasi-Newton methods are classified as local techniques.
Figure 2-2 illustrates this relationship between the most commonly used optimization
methods.
Figure 2-2: The major optimization methods can be classified as either global
techniques (e.g., random walk (RW), simulated annealing (SA), genetic algorithms) or
local techniques (e.g., Davidon-Fletcher-Powell, Nelder and Mead).
The distinction between local and global search or optimization techniques is that the
local techniques produce results that are highly dependent on the starting point or
initial guess while global methods are largely independent of the initial conditions. In
addition, local techniques tend to be tightly coupled to the solution domain. This tight
coupling enables the local methods to take advantage of the solution space
characteristics resulting in relatively fast convergence to a local maximum. However,
the tight solution space coupling also places constraints on the solution domain, such
as differentiability and/or continuity, constraints that can be hard or even impossible to
deal with in practice.
Figure 2-3: Genetic algorithm (GA) optimization compared qualitatively to conjugate
gradient (CG) and random search (Random) in terms of handling discontinuous and
non-differentiable objective functions, and convergence rate.
In particular, the popular quasi-Newton techniques such as the Davidon-Fletcher-
Powell method have a direct dependence on the existence of at least a first derivative.
Conjugate gradient techniques are also either explicitly or implicitly dependent on the
existence of a derivative in the form of the gradient. Gradient-based
techniques also react badly to the presence of discontinuities in the surface
upon which the gradient is being evaluated.
The global techniques, on the other hand, are largely independent of and place few
constraints on the solution domain. This absence of constraints means that the global
methods are much more robust when faced with ill-behaved solution spaces. In
particular, global techniques are much better at dealing with solution spaces having
discontinuities, constrained parameters, and/or a large number of dimensions with
many potential local maxima. The downside to the global methods is that they
either cannot, or at least usually do not, take advantage of local solution space
characteristics, such as gradients, during the search process, resulting in generally
slower convergence than the local techniques.
In electromagnetic design problems, convergence rate is often not nearly as important
as getting a solution. Having found a solution, the ultimate goal is to find the best
solution or global maximum. In these applications, global methods are favored over
local methods. Global techniques yield a global or near-global maximum instead
of a local maximum and often find useful solutions where local techniques cannot.
Global methods are particularly useful when dealing with new problems in which the
nature of the solution space is relatively unknown.
Of the global techniques, genetic algorithms are particularly well suited for a broad
range of problems encountered in electromagnetics. Genetic algorithms are
considerably more efficient and provide much faster convergence than random walk
searches. In addition, they are easily programmed and readily implemented.
Unlike gradient searches, GA optimizers can readily handle discontinuous and non-
differentiable functions. GA optimizers are also well suited for constrained
optimization problems. A qualitative comparison between the major features of
conjugate gradient (CG), random walk (Random) and GA optimization is presented in
Figure 2-3. The emphasis in the GA method of optimization is on striking a
balance between the robustness of random walk methods and the convergence rate
of a local search.

2.2 CONCEPTUAL DEVELOPMENT OF THE GENETIC ALGORITHM
During a GA optimization, a set of trial solutions or individuals is chosen and then
evolved toward an optimal solution under the selective pressure of the fitness function.
In general a GA optimizer must be able to perform six basic tasks:
1) Encode the solution parameters as genes
2) Create a string of the genes to form a chromosome,
3) Initialize a starting population,
4) Evaluate and assign fitness values to individuals in the population,
5) Perform reproduction through the fitness weighted selection of individuals
from the population, and
6) Perform recombination and mutation to produce members of the next
generation.
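The six tasks above can be sketched as a minimal generational GA in Python (an illustrative sketch only; the function and parameter names are hypothetical and are not code from this dissertation):

```python
import random

def run_ga(fitness, n_bits=16, pop_size=20, n_gen=50, p_cross=0.7, p_mut=0.05):
    """Minimal generational GA: binary chromosomes, fitness-weighted
    (roulette) selection, single-point crossover, bitwise mutation."""
    # (1)-(3): encode parameters as bit strings and initialize a random population
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        # (4): evaluate and assign fitness values (must be positive here)
        fits = [fitness(chrom) for chrom in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # (5): reproduction via fitness-weighted selection of two parents
            p1, p2 = random.choices(pop, weights=fits, k=2)
            # (6): recombination (single-point crossover) and mutation
            if random.random() < p_cross and n_bits > 1:
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(n_bits):
                    if random.random() < p_mut:
                        child[i] ^= 1  # flip the mutated bit
                new_pop.append(child)
        pop = new_pop[:pop_size]  # the new generation replaces the old
    return max(pop, key=fitness)

# Usage: maximize the number of 1-bits ("one-max"); +1 keeps the weights positive
best = run_ga(fitness=lambda c: sum(c) + 1)
```

The loop above is the generational scheme described in chapter 3: the new generation is the same size as, and completely replaces, the current generation.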
In attempting to quantify the operation of GAs, Holland [7] introduced the concept of
similarity templates or schemata and proposed the fundamental theorem of genetic
algorithms, also known as the schema theorem. A schema is a similarity template that
describes similarities between subsets of chromosomes in terms of similarities at
certain positions within the chromosome. Consider the set of 10-bit binary
chromosomes {1001101110, 1001001010, 1111101110, 1001111110}. Introducing
the character * into the set of possible states as the "don't care" state (e.g. {0, 1} -> {0,
1, *}), the similarity template can be formed. Namely, the similarity template, or the
common terms between the four chromosomes listed above, is 1**1**1*10. This
is illustrated in Figure 2-4.

Figure 2-4: Holland's schemata concept: the chromosomes 1001101110, 1001001010,
1111101110, and 1001111110 form a subset described by the schema 1**1**1*10.

Figure 2-5: Some schemata of a 7-bit binary chromosome (0111000); the total number of
schemata is (k+1)^L.
Any given chromosome actually represents a number of schemata. In fact, it has been
shown that the total number of schemata is equal to (k+1)^L, where L is the length of the
chromosome and k is the cardinality of the alphabet. Figure 2-5 shows a binary
chromosome (k=2) of length 7 (L=7) expanded into a number of schemata. Only four of
the 2187 possible schemata are shown in Figure 2-5.
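The schema-membership test and the (k+1)^L count can be checked with a few lines of Python (illustrative; the helper `matches` is hypothetical):

```python
def matches(chromosome, schema):
    """True when the chromosome belongs to the schema ('*' = don't care)."""
    return all(s == '*' or s == c for c, s in zip(chromosome, schema))

chroms = ['1001101110', '1001001010', '1111101110', '1001111110']
assert all(matches(c, '1**1**1*10') for c in chroms)  # the schema of Figure 2-4

# Total number of schemata for alphabet cardinality k and length L is (k+1)**L;
# for a 7-bit binary chromosome (k=2, L=7): (2+1)**7 = 2187.
total_schemata = (2 + 1) ** 7
```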
All of the chromosomes, or rather the individuals in the population represented by the
chromosomes, in the subset share the features represented by the schema. These
individuals are said to belong to the schema. In Holland's approach to explaining GAs,
the algorithm operates not on individuals but rather on the schemata representing the
population.
The action of the GA over time can then be described by the fundamental theorem of
genetic algorithms, given in equation (2.1).
m(H, t+1) ≥ m(H, t) · [f(H)/f̄] · [1 − p_cross · δ(H)/(L−1) − O(H) · p_mutation]    (2.1)
where m(H,t) is the number of examples of a particular schema H contained in the
population at time t, f(H) is the fitness of schema H, f̄ is the average fitness of
the population, O(H) is the schema order or number of fixed positions in the schema
H, δ(H) is the defining length or distance between the first and last fixed positions in the
schema, p_cross is the probability of crossover, and p_mutation is the probability of mutation.
Basically the theorem says that short, low-order schemata with above-average fitness
will receive exponentially increasing trials in subsequent generations. Or, put another
way, small, highly fit schemata will increase in quantity with succeeding generations.
The concept of schemata also helps to explain the concept of implicit parallelism
which, in part, says that in a population of n structures, each generation results in the
processing of approximately n³ schemata.
With these preliminaries out of the way, the details of how a GA optimizer is put
together and used to solve practical problems can be described in more detail.
Chapter 3
A SIMPLE GENETIC ALGORITHM
This chapter presents the basic elements of a genetic algorithm optimizer. It is
suggested that the reader read this chapter and the following chapter, which presents a
case study of the use of this GA optimizer, and then re-read this chapter to fully
appreciate the GA optimizer presented here. A block diagram of a simple genetic algorithm
optimizer is presented in Figure 3-1. This GA optimizer and the description that
follows is modeled after that presented by Goldberg [90]. Some extensions to the
simple GA optimizer are presented at the end of this chapter.
The performance requirements of GAs outlined in the last chapter lead to the existence
of three phases in a typical genetic algorithm optimization. These phases are (1)
initiation, (2) reproduction, and (3) generation replacement.
Initiation in the typical genetic algorithm optimizer of Figure 3-1 consists of filling an
initial population with a predetermined number of encoded, usually randomly created
parameter strings or chromosomes. Each of these chromosomes represents an
individual prototype solution or simply an individual. The set of individuals is called
the current generation.

Figure 3-1: Block diagram of a simple genetic algorithm optimizer.

Each individual in the set is assigned a fitness value by
evaluating the fitness function for each individual.
The reproduction phase produces a new generation from the current generation. In
reproduction, a pair of individuals is selected from the population to act as parents.
The parents undergo crossover and mutation, thereby producing a pair of children.
Then these children are placed in the new generation. The selection, crossover, and
mutation operations are repeated until enough children have been generated to fill the new generation. In some GA implementations this scheme is altered slightly.
Selection is used to fill the new generation and then crossover and mutation are
applied to the individuals in the new generation through random pairings. In either
case, the new generation replaces the old generation.
In the simple genetic algorithm presented here, the new generation is the same size as
and completely replaces the current generation. This is known as a generational
genetic algorithm. Alternatively, in slightly more complicated GA implementations,
the new generation can be of a different size than its predecessor and/or there can be
overlap between the new generation and the old generation. GA methods having
overlapping populations, called steady-state genetic algorithms, will be discussed in
more detail later in this chapter.
In the generation replacement phase, the new generation replaces the current
generation and fitness values are evaluated for and assigned to each of the new
individuals. The termination criterion is then evaluated and if it has not been met, the
reproduction process is repeated.
3.1 CHROMOSOMES AND PARAMETER CODING
Genetic algorithms operate on a coding of the parameters instead of the parameters
themselves. The coding is a mapping from the parameter space to the chromosome
space that transforms the set of parameters, usually consisting of real numbers, to a
finite length string. The coded parameters represented by genes in the chromosome
enable the genetic algorithm to proceed in a manner that is independent of the
parameters themselves and, therefore, independent of the solution space. Typically, a
binary coding is utilized, but any encoding from binary to continuous floating-point
number representations of the parameters can be used. Some of these concepts are
illustrated in Figure 3-2.
Generally, it has been shown that using a coding that has some underlying relevance to
the problem at hand produces best results. In addition, it is generally best to use the
shortest possible alphabet. At times the two rules can be at odds with each other.
Other times, binary coding which has the shortest useful alphabet is the natural coding.
Even when binary coding has little relevance to a given problem, it does yield very
simple GA operators and can be used profitably.
Figure 3-2: Chromosomes can be entirely encoded (usually binary), floating point, or
mixed binary and floating point. Generally a parameter is equivalent to a gene.

In a binary coding, the parameters are each represented by a finite length binary string.
The combination of all of the encoded parameters is a string of ones and zeros. The
coded parameters, represented by a set of 1's and 0's for binary coding, are analogous
to, and often referred to as, genes.
The genetic algorithm acts on the chromosome to cause an evolution towards an
optimal solution. Fitness values provide a measure of the goodness of a given
chromosome and, by direct association, the goodness of an individual within the
population. Fitness evaluation involves decoding of the chromosome to produce the
parameters that are associated with the individual, followed by the evaluation of the
fitness function for the decoded parameters.
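The encode/decode mapping between real parameters and binary genes can be sketched as follows (illustrative Python; the parameter ranges for the patch example are hypothetical, chosen only to make the sketch concrete):

```python
def encode(value, lo, hi, n_bits):
    """Quantize a real parameter in [lo, hi] to an n_bits binary gene."""
    levels = 2 ** n_bits - 1
    index = round((value - lo) / (hi - lo) * levels)
    return [(index >> b) & 1 for b in reversed(range(n_bits))]

def decode(gene, lo, hi):
    """Map a binary gene back to a real value for fitness evaluation."""
    index = int(''.join(map(str, gene)), 2)
    return lo + (hi - lo) * index / (2 ** len(gene) - 1)

# A chromosome is the concatenation of the encoded genes, e.g. a patch width
# (hypothetical range 10-50 mm) and a substrate height (hypothetical 0.5-3.0 mm):
chromosome = encode(30.0, 10.0, 50.0, 8) + encode(1.6, 0.5, 3.0, 8)
width = decode(chromosome[:8], 10.0, 50.0)  # ~30.08, i.e. 30.0 up to quantization
```

Note that the round trip is exact only up to the quantization step (hi − lo)/(2^n_bits − 1), which is why tolerance limits on parameters fold naturally into the choice of gene length.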
3.2 SELECTION STRATEGIES
Selection introduces the influence of the fitness function to the genetic algorithm
optimization process. Selection must utilize the fitness of a given individual since
fitness is the measure of the “goodness” of an individual. However, selection cannot
be based solely on choosing the best individual because the best individual may not be
very close to the optimal solution. Instead, some chance that relatively unfit
individuals are selected must be preserved to ensure that genes carried by these unfit
individuals are not “lost” prematurely from the population. In general, selection
involves a mechanism relating an individual's fitness to the average fitness of the
population.

A number of selection strategies have been developed and utilized for genetic
algorithm optimization. These strategies are generally classified as either stochastic or
deterministic. Usually, selection results in the choice of parents for participation in the
reproduction process. Several of the more important and most widely used of these
selection strategies are discussed below.
3.2.1 Population Decimation
The simplest of the deterministic strategies is simply survival of the fittest: fitness
ranking with removal of the least fit. Since the population is decimated under this
scheme prior to being built back up through reproduction, this scheme can be called
population decimation. In population decimation, individuals are ranked according to
their fitness values from largest to smallest. An arbitrary minimum fitness is chosen
as a cutoff point and any individual with a lower fitness than the minimum is removed
from the population. The remaining individuals are then used to produce the new
generation through random pairing and reproduction. The pairing and application of
GA reproduction operators are repeated until the new generation is filled.
Population decimation is classified as a deterministic strategy since the individuals
excluded from the population are chosen on the basis of a deterministic comparison
between their individual fitness values and an arbitrarily chosen threshold. A variation
on this theme is to produce a set of individuals through random pairing prior to
decimation, add these new individuals to the population and then decimate the
population to return it to its original size. In either case, the influence of the fitness
function enters into the process only during the deterministic decimation process.
The advantage of Population Decimation selection lies in its simplicity. All that is
required is to determine which individuals are fit enough to remain in the population
and then to provide a means for randomly pairing the individuals that survive the
decimation process.
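Population decimation can be sketched in a few lines (illustrative Python; names are hypothetical):

```python
def decimate(population, fitness, min_fitness):
    """Population decimation: rank individuals by fitness and deterministically
    remove everyone below an arbitrary minimum-fitness cutoff."""
    ranked = sorted(population, key=fitness, reverse=True)
    return [ind for ind in ranked if fitness(ind) >= min_fitness]

pop = [[1, 1, 1], [1, 0, 0], [0, 0, 0], [1, 1, 0]]
survivors = decimate(pop, fitness=sum, min_fitness=2)
# survivors == [[1, 1, 1], [1, 1, 0]]; these are then randomly paired and
# reproduced to refill the population.
```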
The disadvantage of Population Decimation is that once an individual has been
removed from the population, any unique characteristic of the population possessed by
that individual is lost. This loss of diversity is a natural consequence of all successful
evolutionary strategies, but in Population Decimation the loss can, and often does,
occur long before the beneficial effects of a unique characteristic are recognized by the
evolutionary process. The normal action of the genetic algorithm is to combine the
characteristics of good individuals to produce better individuals. Unfortunately, good traits may not
be directly associated with the best fitness in the early stages of evolution toward an
optimal solution.
When a characteristic is removed from a population by decimation selection, the only
way that the characteristic may be reintroduced is through mutation. Mutation is used
in GAs as a means for exploring portions of the solution domain. In genetic terms,
mutation is a way of adding new genetic material, or characteristics, but it is a very
poor mechanism for adding specific genetic material. It is best to keep good genes or
good portions of genes whenever possible.
It is due to the serious detrimental effects of this premature loss of beneficial
characteristics that more sophisticated, stochastic selection techniques were
developed. It is a testament to GA’s robustness as an optimization technique that
Population Decimation works at all.
3.2.2 Proportionate Selection
The most popular, in terms of appearance in the literature, of the stochastic selection
strategies is Proportionate Selection, sometimes called roulette wheel selection [90].
In Proportionate Selection, individuals are selected based on the probability of selection
given in equation (3.1), where f(parent_i) is the fitness of the ith parent.
p_selection = f(parent_i) / Σ_j f(parent_j)    (3.1)
The probability of selecting an individual from the population is purely a function of
the relative fitness of the individual. Individuals with high fitness will participate in
the production of the next generation more often than less fit individuals. This has the
same general effect as the removal of the least fit in Population Decimation, in that
characteristics associated with higher fitness are represented more in subsequent
generations. The distinction between Population Decimation and Proportionate