ANTENNA ARRAYS: PERFORMANCE LIMITS AND GEOMETRY
OPTIMIZATION
by
Peter Joseph Bevelacqua
A Dissertation Presented in Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy
ARIZONA STATE UNIVERSITY
May 2008
ANTENNA ARRAYS: PERFORMANCE LIMITS AND GEOMETRY
OPTIMIZATION
by
Peter Joseph Bevelacqua
has been approved
March 2008
Graduate Supervisory Committee:
Constantine A. Balanis, Chair
Joseph Palais
Abbas Abbaspour-Tamijani
James Aberle
Cihan Tepedelenlioglu
ACCEPTED BY THE GRADUATE COLLEGE
ABSTRACT
The radiation pattern of an antenna array depends strongly on the weighting method
and the geometry of the array. Selection of the weights has received extensive attention,
primarily because the radiation pattern is a linear function of the weights. However, the
array geometry has received relatively little attention even though it also strongly
influences the radiation pattern. This is primarily because of the complex way in which
the geometry affects the radiation pattern. The main goal of this dissertation is to
develop methods of optimizing antenna array geometries.
An adaptive array with the goal of suppressing interference is investigated. It is
shown that the interference rejection capabilities of the antenna array depend upon its
geometry. The concept of an interference environment is introduced, which enables
optimization of an adaptive array based on the expected directions and power of the
interference. This enables the optimized array to perform better on average, instead of
only in specific situations. An optimization problem is derived whose solution yields an
optimal array for suppressing interference. Optimal planar arrays are presented for
varying numbers of elements. It is shown that, on average, the optimal arrays increase the
signal-to-interference-plus-noise ratio (SINR) when compared to standard arrays.
Sidelobe level is an important metric used in antenna arrays, and depends on the
weights and element positions in the array. A method of determining optimal sidelobe-
minimizing weights is derived that holds for any linear array geometry, beamwidth,
antenna type, and scan angle. The positions are then optimized simultaneously with the
optimal weights to determine the minimum possible sidelobe level in linear arrays.
Results are presented for arrays of varying size, with different antenna elements, and for
distinct beamwidths and scan angles.
Minimizing sidelobes is then considered for 2D arrays. A method of determining
optimal weights in symmetric 2D arrays is derived for narrowband and wideband cases.
The positions are again simultaneously optimized with the weights to determine optimal
arrays, weights, and sidelobe levels. This is done for arrays with varying numbers of
elements, beamwidths, bandwidths, and different antenna elements.
ACKNOWLEDGEMENTS
This work would not have been possible without my adviser, Dr. Constantine
Balanis. Dr. Balanis let me into his research group and gave me funding to research array
geometry, which ultimately led to the work presented here. His guidance and helpfulness
were paramount in producing successful research; without this the work would not have
been completed due to my youthful impatience and wavering trajectory.
I would like to thank Dr. Joseph Palais, Dr. Abbaspour-Tamijani, Dr. James
Aberle and Dr. Cihan Tepedelenlioglu for taking the time to be on my research
committee and for helpful suggestions along the way, specifically during my qualifying
and comprehensive examinations. Thanks also to Dr. Gang Qian and Dr. Andreas
Spanias for helping with my qualifying exam and in understanding Fourier Transforms.
My thanks go to my colleagues at ASU, including Zhiyong Huang, Victor
Kononov, Bo Yang, and Aron Cummings. The presence of these people increased the
quality of my research and life in various ways during my time at ASU.
This work is the culmination of approximately 10 years of college education. I am
indebted to many people for academic, personal, and financial assistance along the way.
Of these, I would like to thank Dr. Shira Broschat and Dr. John Schneider from
Washington State, Dr. Lee Boyce from Stanford, and my parents. Many other people
have in some way contributed to my education, but they are too numerous to list here.
TABLE OF CONTENTS
Page
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .ix
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
CHAPTER
I. INTRODUCTION. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Literature Survey. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
II. FUNDAMENTAL CONCEPTS OF ANTENNA ARRAYS. . . . . . . . . . . . . . . . . . . 8
2.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
2.2 Antenna Characteristics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Wireless Communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Antenna Arrays. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Spatial Processing Using Antenna Arrays. . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 Aliasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
III. WEIGHTING METHODS IN ANTENNA ARRAYS . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24
3.2 Phase-Tapered Weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Schelkunoff Polynomial Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Dolph-Chebyshev Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.5 Minimum Mean-Square Error (MMSE) Weighting. . . . . . . . . . . . . . . . . . . . .29
3.6 The LMS Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
IV. METHODS OF ANTENNA ARRAY GEOMETRY OPTIMIZATION. . . . . . . . 38
4.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
4.2 Linear Programming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 Convex Optimization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4 Simulated Annealing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.5 Particle Swarm Optimization (PSO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
V. ARRAY GEOMETRY OPTIMIZATION FOR INTERFERENCE
SUPPRESSION. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
5.2 Interference Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3 Optimization for Interference Suppression. . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.4 Planar Array with Uniform Interference at Constant Elevation. . . . . . . . . . . . 65
5.5 Using Simulated Annealing to Find an Optimal Array. . . . . . . . . . . . . . . . . . 68
5.6 Evaluating the Performance of Optimal Arrays. . . . . . . . . . . . . . . . . . . . . . . . 72
5.7 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
VI. MINIMUM SIDELOBE LEVELS FOR LINEAR ARRAYS. . . . . . . . . . . . . . . . . 78
6.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78
6.2 Problem Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3 Determination of Optimum Weights for an Arbitrary Linear Array. . . . . . . . 81
6.4 Broadside Linear Array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
6.5 Array Scanned to 45 Degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.6 Array of Dipoles Scanned to Broadside. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.7 Mutual Coupling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
VII. MINIMIZING SIDELOBES IN PLANAR ARRAYS. . . . . . . . . . . . . . . . . . . . . . 102
7.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
7.2 Two-Dimensional Symmetric Arrays. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.3 Sidelobe-Minimizing Weights for Two-Dimensional Arrays. . . . . . . . . . . . 105
7.4 Sidelobe-Minimizing Weights for Scanned Two-Dimensional Arrays. . . . . 110
7.5 Symmetric Arrays of Omnidirectional Elements. . . . . . . . . . . . . . . . . . . . . . 115
7.6 Symmetric Arrays of Patch Antennas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122
7.7 Wideband Weighting Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
7.8 Optimal Wideband Arrays of Omnidirectional Elements. . . . . . . . . . . . . . . .133
7.9 Optimal Wideband Arrays of Patch Antennas. . . . . . . . . . . . . . . . . . . . . . . . 138
7.10 Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
VIII. SUMMARY, CONCLUSIONS, AND FUTURE WORK. . . . . . . . . . . . . . . . . . . . 146
8.1 Summary and Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.2 Future Work. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
REFERENCES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
LIST OF TABLES
Table Page
I. OUTPUT POWER COMPARISON AMONG DIFFERENT ARRAYS . . 73
II. RELATIVE SIR FOR CASE 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
III. RELATIVE SIR FOR CASE 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
IV. RELATIVE SIR FOR CASE 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
V. NUMBER OF PARTICLES REQUIRED FOR CONVERGENCE FOR
VARYING ARRAY SIZE WITH SIMULATION TIME. . . . . . . . . . . . . . 88
VI. OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 1
(BW = 60°, θ_d = 90°) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
VII. OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θ_d = 90°). . . . . . . . . . . . . 89
VIII. OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 2
(BW = 30°, θ_d = 90°) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
IX. OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θ_d = 90°). . . . . . . . . . . . . 90
X. OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 1
(BW = 60°, θ_d = 45°) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
XI. OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θ_d = 45°). . . . . . . . . . . . . 93
XII. OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 2
(BW = 30°, θ_d = 45°) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
XIII. OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θ_d = 45°). . . . . . . . . . . . . 94
XIV. OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 1 WITH
DIPOLES (BW = 60°, θ_d = 90°). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
XV. OPTIMUM WEIGHTS FOR CASE 1 WITH DIPOLES
(BW = 60°, θ_d = 90°) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
XVI. OPTIMUM ELEMENT POSITIONS (IN λ ) AND SLL FOR CASE 2
WITH DIPOLES (BW = 30°, θ_d = 90°) . . . . . . . . . . . . . . . . . . . . . . . . 97
XVII. OPTIMUM WEIGHTS FOR CASE 2 WITH DIPOLES
(BW = 30°, θ_d = 90°). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
XVIII. OPTIMAL WEIGHTS FOR 7-ELEMENT HEXAGONAL ARRAY. . . .110
XIX. OPTIMAL WEIGHTS WITH ASSOCIATED POSITIONS. . . . . . . . . . . 114
XX. NUMBER OF REQUIRED PARTICLES FOR PSO AND
COMPUTATION TIME FOR N=47. . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
XXI. OPTIMAL SLL AND POSITIONS FOR CASE 1 (DIMENSIONS
IN λ ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
XXII. OPTIMAL WEIGHTS FOR CASE 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
XXIII. OPTIMAL SLL AND POSITIONS FOR CASE 2 (DIMENSIONS
IN λ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
XXIV. OPTIMAL WEIGHTS FOR CASE 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
XXV. OPTIMAL SLL AND POSITIONS FOR CASE 1 OF PATCH
ELEMENTS (UNITS OF λ ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
XXVI. OPTIMAL WEIGHTS FOR CASE 1 WITH PATCH ELEMENTS . . . . .126
XXVII. OPTIMAL SLL AND POSITIONS FOR CASE 2 OF PATCH
ELEMENTS (UNITS OF λ ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
XXVIII. OPTIMAL WEIGHTS FOR CASE 2 WITH PATCH ELEMENTS . . . . .128
XXIX. OPTIMAL SLL AND POSITIONS FOR OMNIDIRECTIONAL
ELEMENTS (UNITS OF λ_c, FBW=0.5). . . . . . . . . . . . . . . . . . . . . . . . 135
XXX. OPTIMAL WEIGHTS FOR OMNIDIRECTIONAL ELEMENTS
(FBW=0.5). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
XXXI. OPTIMAL SLL AND POSITIONS FOR PATCH ELEMENTS
(UNITS OF λ_c, FBW=0.2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
XXXII. OPTIMAL WEIGHTS FOR PATCH ELEMENTS (FBW=0.2). . . . . . . .141
LIST OF FIGURES
Figure Page
1. Elevation (a) and azimuthal (b) patterns for a short dipole. . . . . . . . . . . . . . . . . 10
2. Arbitrary antenna array geometry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3. Spatial processing of antenna array signals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4. Magnitude of array factor for N=5 elements. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5. Magnitude of the array factor (dB) for a 2D array. . . . . . . . . . . . . . . . . . . . . . 21
6. Array factor of steered linear array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7. Array pattern with weights from Schelkunoff method. . . . . . . . . . . . . . . . . . . . 27
8. Dolph-Chebyshev array for N=6 with sidelobes at -30 dB. . . . . . . . . . . . . . . . 29
9. Array factor magnitudes for MMSE weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10. MSE at each iteration, along with the optimal MSE. . . . . . . . . . . . . . . . . . . . . . 37
11. Symmetric linear array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
12. Array factor for optimal weights found via linear programming. . . . . . . . . . . . 48
13. Examples of convex sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49
14. Examples of nonconvex sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
15. Illustration of a convex function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
16. Optimum N=4 element array (measured in units of λ ). . . . . . . . . . . . . . . . . . . 70
17. Optimum N=5 element array (measured in units of λ ). . . . . . . . . . . . . . . . . . . 71
18. Optimum N=6 element array (measured in units of λ ). . . . . . . . . . . . . . . . . . . 72
19. Optimum N=7 element array (measured in units of λ ). . . . . . . . . . . . . . . . . . . 74
20. Basic setup of a linear N-element array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
21. Magnitude of array factor for optimal arrays (N=6) . . . . . . . . . . . . . . . . . . . . . .91
22. Magnitude of array factor for optimal arrays (N=7) . . . . . . . . . . . . . . . . . . . . . . 91
23. Magnitude of array factor for optimal arrays (N=6) . . . . . . . . . . . . . . . . . . . . . .95
24. Magnitude of array factor for optimal arrays (N=7) . . . . . . . . . . . . . . . . . . . . . .95
25. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=6). . . 98
26. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=7). . . 98
27. Arbitrary planar array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
28. Suppression region for two-dimensional arrays. . . . . . . . . . . . . . . . . . . . . . . 107
29. Array factors for optimal weighted and phase-tapered array (φ = 0°). . . . . . . 109
30. Array factors for optimal weighted and phase-tapered array (φ = 45°). . . . . . 110
31. AF for phase-tapered weights; (a) elevation plot, (b) azimuth plot. . . . . . . . 112
32. Suppression region for an array scanned away from broadside. . . . . . . . . . . . 113
33. Azimuth plot of array factors with optimal and phase-tapered weights. . . . . 114
34. Elevation plot of array factors with optimal and phase-tapered weights. . . . .115
35. Optimal symmetric array locations for Case 1 (dimensions in λ ). . . . . . . . . 118
36. Magnitude of T(θ) at distinct azimuthal angles (Case 1), N=7. . . . . . . . . . . . 119
37. Optimal symmetric array locations for Case 2 (dimensions in λ ). . . . . . . . . 121
38. Magnitude of T(θ) at distinct azimuthal angles (Case 2), N=7. . . . . . . . . . . . 121
39. Magnitude of patch pattern (in dB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
40. Optimal symmetric patch array locations for Case 1 (units of λ ). . . . . . . . . . 125
41. Magnitude of T(θ) at distinct azimuth angles (Case 1), N=7 (patch). . . . . . . 126
42. Optimal symmetric patch array locations for Case 2 (units of λ ). . . . . . . . . . 127
43. Magnitude of T(θ) at distinct azimuth angles (Case 2), N=7 (patch). . . . . . . 128
44. Suppression region for two-dimensional arrays over a frequency band. . . . . 132
45. Optimal symmetric array locations for FBW=0.5 (units of λ_c). . . . . . . . . . . 134
46. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L. . . . . . . . . . . 136
47. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c. . . . . . . . . . . 137
48. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U. . . . . . . . . . . 138
49. Optimal symmetric patch array locations for FBW=0.2 (units of λ_c). . . . . . 140
50. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L. . . . . . . . . . . 142
51. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c. . . . . . . . . . . 143
52. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U. . . . . . . . . . . 144
I. INTRODUCTION
1.1. Overview
On December 12, 1901, Guglielmo Marconi successfully received the first
transatlantic radio message [1]. The message was the Morse code for the letter ‘S’ – three
short clicks. This event was arguably the most significant achievement in early radio
communication. This communication system, while technically functional, clearly had
significant room for improvement.
In the century since, the field of wireless communication has seen continuous
improvement. The envelope has been pushed in every imaginable direction, with no letup
in progress likely in the foreseeable future. Developments in the fields of electronics,
information theory, signal processing, and antenna theory have all contributed to the ubiquity of
wireless communication systems today. However, despite the tremendous advances since
the days of Marconi in each of these fields, the desire for improved wireless
communication systems has not been quenched.
The concept of an antenna array was first introduced in military applications in the
1940s [2]. This development was significant in wireless communications as it improved
the reception and transmission patterns of antennas used in these systems. The array also
enabled the antenna system to be electronically steered – to receive or transmit
information primarily from a particular direction without mechanically moving the
structure.
As the field of signal processing developed, arrays could be used to receive energy
(or information) from a particular direction while rejecting information or nulling out the
energy in unwanted directions. Consequently, arrays could be used to mitigate
intentional interference (jamming) or unintentional interference (radiation from other
sources not meant for the system in question) directed toward the communication system.
Further development in signal processing led to the concept of adaptive antenna
arrays. These arrays adapted their radiation or reception pattern based on the
environment they were operating in. This again significantly contributed to the capacity
available in wireless communication systems.
While there has been a large amount of work on the signal processing aspects (and
in conjunction, the electronics used to implement the algorithms), the physical geometry
(or location of the antenna elements in the array) has received relatively little attention.
The reason for this lies in the mathematical complexity of dealing with the optimization
of the element positions for various situations. As shown in Chapter 2, understanding the
influence of the element weighting (which is a major component of the signal processing
involved in antenna arrays) is significantly simpler than understanding the effect of
varying the positions of the elements.
Thanks to the tremendous advances in numerical computing, optimization of the
element positions in an antenna array (for various situations) is now tractable. The
primary goal of this dissertation is to study the influence of array geometry on wireless
system performance. It will be shown that performance gains can be obtained via
intelligent selection of the array geometry. Array geometry optimization can therefore
be expected to contribute to the continuing advancement of wireless communication
system performance.
This dissertation is organized as follows. Chapter 2 introduces the main ideas and
terminology used in understanding antenna arrays. Chapter 3 discusses methods of
choosing the weighting vector applied in the antenna array. Chapter 4 discusses the
various optimization methods used in this work. Chapters 2-4 are primarily a collection
of others' work.
Chapters 5-7 represent the author's original research for this dissertation. Chapter 5
deals with a specific problem in a wireless communication system, namely interference
suppression in an adaptive array. An optimization problem is derived whose solution
yields an optimal array for a given interference environment, as defined in that chapter.
Solutions of this optimization problem (that is, array geometries) are presented for a
specific situation and the gains in performance are illustrated.
Chapter 6 deals with the minimum possible sidelobe level for a linear antenna array
with a fixed number of elements. A method of determining the optimal sidelobe-
minimizing weight vector is derived that holds for an arbitrary antenna type, scan
angle, and beamwidth. This method of weight selection, coupled with a geometrical
optimization routine, yields a lower bound on sidelobe levels in linear antenna arrays. The
minimum sidelobe levels of arrays with an optimized geometry are compared to those
with a standard (or non-optimized) geometry. The methods are employed on arrays of
varying size and beamwidths, and with different types of antenna elements.
Chapter 7 deals with the determination of minimum sidelobe levels in planar, or
two-dimensional, arrays. The method of weight selection is extended from the linear to
the planar case along with the geometrical optimization routine. Two-dimensional arrays
of varying sizes and beamwidths, made up of different antenna types, are optimized.
The narrowband assumption is then discarded and optimal weights are derived for the
wideband situation. Optimal geometries are then presented for the wideband case for
arrays made up of both omnidirectional and patch antenna elements.
Chapter 8 summarizes the important results and presents conclusions based on the
solutions. Finally, future problems of interest are discussed. The remainder of this
chapter presents a literature survey of previous research on array geometry optimization.
1.2. Literature Survey
The first articles on improving array performance via geometry optimization date
back to the early 1960s. Unz [3] studied linear arrays in 1960 and noted that performance
improvement could be obtained by holding the weights constant and varying the element
positions. In 1960, King [4] proposed eliminating grating lobes via element placement
in an array. In 1961, Harrington [5] considered small element perturbations in an attempt
to synthesize a desired array pattern.
The concept of ‘thinned arrays’ was introduced in the early 1960s as well. It was
noted that in large, periodically spaced antenna arrays, removing some of the elements
did not noticeably degrade the array’s performance. This method of altering an array’s
geometry was introduced by Skolnik et al. [6] and was first studied deterministically –
attempting to systematically determine the minimum number of elements required to
achieve a desired performance metric. For large arrays, the problem was tackled in a
statistical fashion to avoid the excessive amount of computation time required to
determine an optimal thinned array [7].
Stutzman [8] introduced a method of designing nonuniformly spaced linear
arrays, based on Gaussian quadrature, that involves fairly simple calculations. In
addition, he showed that by appropriate scaling of the element spacings, some of the
elements will lie in the region where the ideal source has a small excitation, and thus can
be omitted from the array (another method of array thinning).
Array geometry plays a critical role in the direction-finding capabilities of antenna
arrays. Pillai et al. [9] show that for linear aperiodic arrays, there exists an array that has
superior spatial-spectrum estimation ability. Gavish and Weiss [10] compared array
geometries based on how distinct the steering vectors are for distinct signal directions;
they proposed that larger distinctions lead to less ambiguity in direction finding. Ang et
al. [11] also evaluated the direction-finding performance of arrays by varying the
elements’ positions based on a genetic algorithm.
Antenna arrays are also used for diversity reception, or comparing signal power at
spatially distinct locations and processing the signals based on their relative strength. A
textbook proof analyzing uniformly distributed multipath components suggests arrays will
exhibit good diversity characteristics if the antennas are separated by at least 0.4λ [12].
An analytical method of choosing a linear array geometry for a given set of weights
is presented in [13]; this method was also extended to circular and spherical arrays [14].
This method requires a specified array pattern and set of weights; it then attempts to
determine an array geometry that closely approximates the desired array pattern. The
method does not guarantee a global optimum for the element positions. In [15] the
weights are optimized and the linear array is then scaled to find an optimal geometry. A
method of perturbing element positions to place nulls in desired directions is described in
[16].
Due to the large increase in the computational capability of computers, array
geometry optimization has been under investigation recently using biologically inspired
algorithms, such as Genetic Algorithms (GA). Khodier and Christodoulou [17] used the
Particle Swarm Optimization (PSO) method to determine optimal sidelobeminimizing
positions for linear arrays assuming the weights were constant. In [18], PSO methods
were used for planar array synthesis in minimizing sidelobes, along with null placement.
Tennant et al. [19] used a genetic algorithm to reduce sidelobes via element position
perturbations. In [20], the authors demonstrate sidelobe minimization by choosing a
geometry based on the Ant Colony Optimization (ACO) method.
In addition to geometry considerations, the minimum possible sidelobe level for an
array is of interest. For linear, equally spaced arrays, the problem of determining the
optimal weights was solved by Dolph and published in 1946 [21]. This method is known
as the Dolph-Chebyshev method, because Dolph uses Chebyshev polynomials to obtain
the excitation coefficients. The method returns the minimum possible null-to-null
beamwidth for a specified sidelobe level (or equivalently, the minimum possible sidelobe
level for a specified null-to-null beamwidth). This method has an implicit maximum
array spacing for a given beamwidth [22]. Riblet [23] showed that for arrays with
interelement spacing less than λ/2, there exists a set of weights that gives a smaller
null-to-null main beam than Dolph's method. However, Riblet only derives the results for
arrays with an odd number of elements. The Dolph-Chebyshev method produces
sidelobes that have equal amplitudes. A more generalized version of Dolph’s algorithm
(called an equiripple filter) is also frequently used in the design of Finite Impulse
Response (FIR) filters in the field of signal processing [24].
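The Dolph-Chebyshev weighting described above can be sketched in a few lines. This is an illustrative example rather than code from the dissertation: it assumes SciPy's chebwin window (an implementation of Dolph's design) and a 6-element, half-wavelength-spaced broadside array with 30 dB sidelobe suppression, the same parameters as the array of Figure 8.

```python
import numpy as np
from scipy.signal import windows

# Dolph-Chebyshev weights for a 6-element array with 30 dB sidelobe
# suppression. chebwin takes the sidelobe attenuation in dB as a
# positive number and normalizes the peak weight to 1.
N = 6
weights = windows.chebwin(N, at=30)

# Array factor of a uniform linear array with half-wavelength spacing:
# AF(psi) = sum_n w_n exp(j*n*psi), where psi = 2*pi*d*cos(theta).
d = 0.5                                  # element spacing in wavelengths
theta = np.linspace(0.0, np.pi, 1801)
psi = 2.0 * np.pi * d * np.cos(theta)
af = np.abs(np.exp(1j * np.outer(psi, np.arange(N))) @ weights)
af_db = 20.0 * np.log10(af / af.max())   # normalized pattern in dB

print("weights:", np.round(weights, 4))
```

The resulting pattern af_db should exhibit equal-amplitude sidelobes near -30 dB, the equiripple property that characterizes the method.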
In 1953, DuHamel extended the work of Dolph to endfire linear arrays with an odd
number of elements [25]. Dolph’s work was also considered for the case of nonisotropic
sensors; the problem was not solved for the general case [26]. The optimum sidelobe-
minimizing weights for broadside, nonuniformly spaced symmetric linear arrays with
real weights can now be found using linear programming [27]. The general case of
nonuniform arrays with arbitrary scan angle, beamwidth, and antenna pattern is derived in
Chapter 6. In [28], the authors attempt to simultaneously optimize the weights and the
positions of a 25-element linear array using a Simulated Annealing (SA) algorithm. They
make no claim that their results are optimal, but do show the sidelobes lowered via the
optimization method. Adaptive antenna arrays began with the work of Bernard Widrow
in the 1960s [29]. Optimizing an adaptive antenna array’s geometry was performed in
[30] with regards to suppressing interference; this work is the subject of Chapter 5.
The effect of array geometry on wireless systems in urban environments using
Multiple-Input Multiple-Output (MIMO) channels has been studied [31]. The array
geometry is shown to have a significant impact on the MIMO channel properties,
including the channel capacity. Because of the difficulty in examining array geometry
and determining an optimal array, the impact of geometry on performance was studied by
considering standard arrays such as the uniform linear array. The effect of array
orientation on MIMO wireless channels was investigated in [32].
II. FUNDAMENTAL CONCEPTS OF ANTENNA ARRAYS
2.1. Introduction
An antenna array is a set of N spatially separated antennas. Put simply, an array of
antennas does a superior job of receiving signals than a single antenna does, leading to
the widespread use of arrays in wireless applications.
Arrays in practice can have as few as N = 2 elements, which is common for the
receiving arrays on cell-phone towers. In general, array performance improves with
added elements; therefore arrays in practice usually have more elements. Arrays can
have several thousand elements, as in the AN/FPS-85 Phased Array Radar Facility
operated by the U.S. Air Force [33].
The array has the ability to filter the electromagnetic environment it is operating in
based on the spatial variation of the signals present. There may be one signal of interest
or several, along with noise and interfering signals. The methods by which an antenna
array can process signals in this manner are discussed following an elementary discussion
of antennas.
2.2. Antenna Characteristics
Throughout this dissertation, a Cartesian coordinate system with axis labels x, y,
and z will be used, along with spherical coordinates θ (polar angle ranging from 0 to π,
measured off the z-axis) and φ (azimuth angle ranging from 0 to 2π, measured off the
x-axis). The coordinates are illustrated in Figure 1.
A physical antenna has a radiation pattern that varies with direction. By reciprocity,
the radiation pattern is the same as the antenna’s reception pattern [34], so the two can be
discussed interchangeably. The radiation pattern is also a function of frequency;
however, except where noted, it will be assumed a single frequency is of interest
(described by the corresponding wavelength λ). The radiation pattern takes different
shapes depending on how far the observation point is from the antenna; these regions, in order
of increasing distance from the antenna, are commonly called the reactive near-field
region, the radiating near-field (Fresnel) region and the far-field (Fraunhofer) region [22].
For an antenna of maximum length D, the far-field region occurs when the following two
conditions are met:

R > 2D²/λ    (2.1)

R ≫ λ.    (2.2)

For a modern cellular phone operating at 1.9 GHz with an antenna length of roughly D = 4
cm, both inequalities are achieved for R > 2 meters. In practice, antennas communicate in
the far-field region, and this is assumed throughout.
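The arithmetic behind the phone example is easy to verify. The following is an illustrative sketch (not from the dissertation); the speed-of-light value is the standard approximation:

```python
import numpy as np

# Quick check of the far-field thresholds (2.1)-(2.2) for the cellular-phone
# numbers in the text: f = 1.9 GHz, D = 4 cm.
c = 3e8                      # speed of light, m/s
f = 1.9e9                    # carrier frequency, Hz
wavelength = c / f           # ~0.158 m
D = 0.04                     # antenna length, m

r_fraunhofer = 2 * D**2 / wavelength   # condition (2.1): R > 2D^2/lambda
print(wavelength)      # ~0.158 m
print(r_fraunhofer)    # ~0.020 m
# Condition (2.1) is satisfied almost immediately; it is (2.2), R >> lambda,
# that dominates here, and R = 2 m is already more than 12 wavelengths.
```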
The radiated far-zone field of an antenna will be described by the function
F(R, θ, φ). For example, the far-zone field radiated by a short dipole of length L with
uniform current I is given by [33]:

F(R, θ, φ) = (jILη₀ / (2λR)) e^(−jkR) sin θ,    (2.3)

where j = √−1, η₀ is the impedance of free space, and λπ/2... k = 2π/λ is the wavenumber.
The normalized field pattern will be of frequent interest in this work. This function,
denoted by f(θ, φ), describes the angular variation in the reception pattern of the
antenna. For the short dipole, the normalized field pattern is expressed as

f(θ, φ) = sin θ.    (2.4)
This field pattern is plotted in Figure 1. The horizontal axis in Figure 1(a) can be the x-
or y-axis; due to symmetry the elevation pattern will not change.

Figure 1. (a) Elevation and (b) azimuthal patterns for a short dipole.
Directivity (or maximum directivity) is an important antenna parameter that
describes how much more directional an antenna is than a reference source, usually an
isotropic radiator. An antenna with a directivity of 1 (or 0 dB) would be an isotropic
source; all actual antennas exhibit a directivity higher than this. The higher the
directivity, the more pointed or directional the antenna pattern will be. Directivity, D,
can be calculated from

D = 4π / ∫₀^{2π} ∫₀^{π} [f(θ, φ)]² sin θ dθ dφ.    (2.5)

The directivity of the short dipole discussed previously is 1.5 (1.76 dB).
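The 1.5 figure can be checked numerically against the directivity integral. A minimal sketch (not part of the dissertation), using the short-dipole pattern f(θ, φ) = sin θ:

```python
import numpy as np

# Numerical evaluation of (2.5) for the short dipole, f(theta, phi) = sin(theta).
# The closed-form answer is D = 1.5 (1.76 dB).
theta = np.linspace(0.0, np.pi, 20001)
dtheta = theta[1] - theta[0]

# [f]^2 * sin(theta) integrated over theta; the phi integral contributes a
# factor of 2*pi because the dipole pattern has no phi dependence.
integral = 2.0 * np.pi * np.sum(np.sin(theta) ** 2 * np.sin(theta)) * dtheta
D = 4.0 * np.pi / integral
print(round(D, 3))                  # 1.5
print(round(10 * np.log10(D), 2))   # 1.76 (dB)
```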
Antennas are further described by their polarization. The polarization of an antenna
is the same as the polarization of its radiated fields. The polarization of the radiated field
is the figure traced out by the electric field at a fixed location in space as a function of
time. Common polarizations are linear, elliptical and circular polarization. The
polarization of the short dipole is linear.
If an antenna is attempting to receive a signal from an electromagnetic wave, it
must be matched to the polarization of the incoming wave. If the wave is not matched to
the antenna, part or all of the energy will not be detected by the antenna [22]. In this
dissertation, unless otherwise noted, it will be assumed that the antennas are properly
matched in polarization to the desired waves.
Further information on antennas can be found in several popular textbooks [22, 35
36]. The preceding discussion will be sufficient for the purposes in this work.
2.3. Wireless Communication
The primary purpose of antenna systems is for communication; however, they are
also used for detection [37]. The information to be transmitted or received will be
represented by m(t). The message m(t) will be assumed to be bandlimited to B Hz,
meaning almost all the energy has frequency content below B Hz. In the earlier days of
radio, m(t) had the information coded directly into the amplitude or frequency of the
signal (as in AM or FM radio). Information today is primarily encoded into digital form,
and m(t) is a train of a discrete set of symbols representing 1s and 0s. The information is
still encoded into the amplitude and phase of these symbols; however, the amplitudes and
phases now take on a discrete set of values. In the most basic form of digital
communication, binary phase shift keying (BPSK), m(t) is either +1 or −1 (representing a
1 or a 0), so that the information is encoded into the phase. Note that m(t) can be
complex, where the real part represents the inphase component of the signal and the
imaginary part corresponds to the quadrature component [38]. Digital communication is
used because of its high data rate, lower probability of error than in analog
communication (along with errorcorrecting codes), high spectral efficiency and high
power efficiency [12].
The message m(t) is then modulated up to the frequency used by the antenna
system. The transmitted signal s(t) is given by

s(t) = m(t) e^(j2πf_c t),    (2.6)

where f_c is the carrier (or center) frequency used by the antenna system. Note that in
general B ≪ f_c. Typically, the energy then lies within the frequency spectrum in a very
narrow band around f_c, so that the transmitted signal is assumed to be a monochromatic
plane wave. If the signal is sufficiently broadband that the narrowband assumption
cannot be applied, the signal can be processed by filtering it into distinct narrow bands
and processing each separately.
In the far field the narrowband signal will have the characteristics of a
monochromatic plane wave. Assume that the wave is traveling in the direction defined
by (θ, φ) relative to a reference point (for instance, a receiving antenna). The wavevector
k is defined to represent the magnitude of the phase changes along the x, y, and z
directions:
k = (k_x, k_y, k_z) = (2π/λ)(sin θ cos φ, sin θ sin φ, cos θ).    (2.7)

The spatial variation of the signal can then be written as

S(x, y, z, t) = s(t) e^(−j(k_x x + k_y y + k_z z)).    (2.8)

Defining the position vector as R = (x, y, z), (2.8) can be written more compactly as

S(R, t) = s(t) e^(−jk·R).    (2.9)
Digital signal processors operating on a single antenna can only process signals based on
their time variation. Space-time filters process signals based on their spatial and temporal
variation [39]. In order to do spatial filtering, an array of sensors is required.
2.4. Antenna Arrays
The basic setup of an arbitrary antenna array is shown in Figure 2. The location of
the nth antenna element is described by the vector d_n, where

d_n = [x_n  y_n  z_n].    (2.10)

The set of locations of an N-element antenna array will be described by the N-by-3 matrix
D, where

D = [ d_1
      d_2
       ⋮
      d_N ].    (2.11)

When the array is linear (for example, all elements placed along the z-axis), the matrix D
can be reduced to a vector.
Figure 2. Arbitrary antenna array geometry.
Let the output from the nth antenna at a specific time be X_n. Then the output from
antenna n is weighted (by w_n) and summed together to produce the antenna array output,
Y, as shown in Figure 3. See Chapter 3 for a discussion of weighting methods. The array
output can be written as

Y = Σ_{n=1}^{N} w_n X_n.    (2.12)

Defining

X = [X_1  X_2  …  X_N]^T    (2.13)

and

W = [w_1  w_2  …  w_N]^T,    (2.14)

then (2.12) can be rewritten in compact form as

Y = W^T X,    (2.15)

where T represents the transpose operator.
Figure 3. Spatial processing of antenna array signals.
2.5. Spatial Processing Using Antenna Arrays
Suppose the transmitted signal given by (2.9) is incident upon an N-element antenna
array. Let the normalized field pattern for each antenna be described as a function of the
wavevector (k) and be represented by f_n(k). The array output is then

y(t) = s(t) Σ_{n=1}^{N} w_n f_n(k) e^(−jk·d_n).    (2.16)

If the elements are identical, (2.16) reduces to

y(t) = s(t) f(k) ( Σ_{n=1}^{N} w_n e^(−jk·d_n) ).    (2.17)

The quantity in parentheses is referred to as the array factor (AF). Hence, the output is
proportional to the transmitted signal, multiplied by the element factor and the array
factor. This factoring is commonly called pattern multiplication, and it is valid for arrays
with identical elements oriented in the same direction.
A very general form for the output of an array is when there are G incident signals
(with wavevectors k_i, i = 1, 2, …, G) incident on N antennas with distinct patterns (given
by f_n(k), n = 1, 2, …, N). Then the output is

y(t) = Σ_{n=1}^{N} Σ_{i=1}^{G} s_i(t) w_n f_n(k_i) e^(−jk_i·d_n).    (2.18)

For one-dimensional arrays with elements along the z-axis (linear arrays),

d_n = (0, 0, z_n).    (2.19)

Using (2.7), the AF reduces to

AF = Σ_{n=1}^{N} w_n e^(−j(2π/λ) z_n cos θ).    (2.20)
The one-dimensional array factor is only a function of the polar angle. Hence, the array
can filter signals based on their polar angle θ but cannot distinguish arriving signals
based on the azimuth angle φ.
For two-dimensional arrays with elements on the x-y plane, the array factor
becomes [22]

AF = Σ_{n=1}^{N} w_n e^(−j(2π/λ)(x_n sin θ cos φ + y_n sin θ sin φ)).    (2.21)
The array factor is a function of both spherical angles and can therefore filter signals
based on their azimuth and elevation angles.
The effect of the array on the received signal as a function of the angle of arrival is
now illustrated by examining the array factor. An N-element array will be analyzed. For
simplicity let w_n = 1 for all n, and let d_n = (0, 0, nλ/2). Then (2.20) reduces to

AF = Σ_{n=1}^{N} e^(−jnπ cos θ).    (2.22)

Using the identity

Σ_{n=0}^{N−1} c^n = (1 − c^N)/(1 − c),    (2.23)

it follows that (2.22) can be written as

AF = e^(−jπ cos θ) (1 − e^(−jNπ cos θ)) / (1 − e^(−jπ cos θ)).    (2.24)
After factoring, the above equation simplifies to

AF = e^(−jπ cos θ) e^(−j(N−1)(π/2) cos θ) [ sin(N(π/2) cos θ) / sin((π/2) cos θ) ].    (2.25)
The magnitude of the array factor is plotted in Figure 4 for an array with N=5 elements,
normalized so that the peak of the array factor is unity or 0 dB. The magnitude of the
array factor shows that the array will receive (or transmit) the maximum energy when
θ = 90°. Manipulation of the weights allows the array factor to be tailored to a
desired pattern, which is the subject of Chapter 3. In addition, the response of the array
factor is strongly influenced by the specific geometry (D) used. Selection of the weights
is the simpler problem, since the array factor is a linear function of the weights. The array
factor is a much more complicated function of the element positions; hence, optimizing
the array geometry is a highly nonlinear and far more difficult problem.
Figure 4. Magnitude of array factor for N=5 elements.
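As a quick numerical illustration (a sketch, not taken from the dissertation), the array factor (2.22) for the N = 5 half-wavelength array can be evaluated directly, confirming the broadside maximum:

```python
import numpy as np

# Array factor (2.22): AF(theta) = sum_{n=1}^{N} exp(-j*n*pi*cos(theta)),
# i.e. N = 5 uniformly weighted elements spaced lambda/2 along the z-axis.
N = 5
theta = np.linspace(0.0, np.pi, 1801)
n = np.arange(1, N + 1)
af = np.exp(-1j * np.pi * np.outer(n, np.cos(theta))).sum(axis=0)

peak_theta = np.degrees(theta[np.argmax(np.abs(af))])
print(peak_theta)             # 90.0 -> maximum response at broadside
print(np.abs(af).max())       # 5.0 -> all elements add in phase there
```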
Directivity can be calculated for an array factor in the same manner as that of an
antenna. In addition, important parameters of array factors include beamwidth and
sidelobe level. The beamwidth is commonly specified as null-to-null or half-power
beamwidth. The null-to-null beamwidth is the distance in degrees between the first nulls
around the main beam. The half-power beamwidth is the distance in degrees between the
half-power points (or 3 dB down on the array factor) around the main beam. The sidelobe
level is commonly specified as the peak value of the array factor outside of the
main beam.
As an example, the array factor for a 3×3 rectangular array is examined. The
weights will again be uniform; i.e., w_n = 1 for all n. The positions for the N = 9 element
array will be d_ab = (aλ/2, bλ/2, 0) for a, b = 0, 1, 2. From (2.21), the array factor
becomes

AF = Σ_{a=0}^{2} Σ_{b=0}^{2} e^(−jπ sin θ (a cos φ + b sin φ)).    (2.26)
Applying the sum formula (2.23) twice, (2.26) reduces to

AF = [ (1 − e^(−j3π sin θ cos φ)) / (1 − e^(−jπ sin θ cos φ)) ] [ (1 − e^(−j3π sin θ sin φ)) / (1 − e^(−jπ sin θ sin φ)) ].    (2.27)

By factoring, (2.27) can be written as

AF = e^(−jπ sin θ cos φ) e^(−jπ sin θ sin φ) [ sin(3π sin θ cos φ / 2) / sin(π sin θ cos φ / 2) ] [ sin(3π sin θ sin φ / 2) / sin(π sin θ sin φ / 2) ].    (2.28)
For ease in plotting, the following variables will be introduced:

u = λk_x/(2π) = sin θ cos φ    (2.29)

v = λk_y/(2π) = sin θ sin φ.    (2.30)
The magnitude of the array factor is plotted in Figure 5. The sidelobes are 9.54 dB down
from the main lobe (which is normalized to 0 dB in the figure).
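The −9.54 dB level can be verified numerically. The sketch below (an illustration, not from the dissertation) evaluates the 3-element uniform factor sin(3x)/sin(x), which the 3×3 pattern of (2.28) inherits along each principal plane:

```python
import numpy as np

# Sidelobe level of the uniform 3-element factor sin(3x)/sin(x), x = (pi/2)*u,
# which appears in each principal plane of the 3x3 array factor (2.28).
u = np.linspace(-1.0, 1.0, 200001)
x = 0.5 * np.pi * u
with np.errstate(divide="ignore", invalid="ignore"):
    af = np.sin(3 * x) / np.sin(x)
af[np.isnan(af)] = 3.0                        # limit at u = 0 (main-beam peak)

af_db = 20 * np.log10(np.abs(af) / 3.0)       # normalize the peak to 0 dB
sidelobe_db = af_db[np.abs(u) > 0.7].max()    # beyond the first null at u = 2/3
print(round(sidelobe_db, 2))                  # -9.54
```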
Figure 5. Magnitude of the array factor (dB) for 2-D array.
Beamwidths are more difficult to specify when the array factor is two dimensional.
Commonly, beamwidths are specified in certain planes (for instance, the elevation and
azimuthal planes) and given in half-power or null-to-null form, as in the one-dimensional
case. The sidelobe level is again the maximum value of the array factor outside of the
main beam.
2.6. Aliasing
The steering vector, v(k), is the vector of propagation delays (or phase changes) across
an array for a given wavevector, k. It can be written mathematically as
v(k) = [ e^(−jk·d_1)  e^(−jk·d_2)  …  e^(−jk·d_N) ]^T.    (2.31)
Aliasing occurs when signals propagating in distinct directions produce the same steering
vector. In that case, the array's response towards the two directions is identical, so
the array cannot distinguish them. This is similar to aliasing in signal
processing, where if the sampling rate in time is too low, distinct frequencies cannot be
resolved.
For uniformly spaced linear arrays, there will exist plane waves from distinct
directions with identical steering vectors if the spacing between elements, Δ, is greater
than λ/2. Similarly, for uniformly spaced rectangular (planar) arrays with elements on
the x-y plane, there will exist distinct directions with identical steering vectors if the
element spacing in the x- or y-direction is greater than λ/2. When aliasing exists, the
main beam may be replicated elsewhere in the pattern. These replicated beams are
referred to as grating lobes.
For arrays without a uniform structure, the distance between elements can be much
larger than λ/2 without introducing aliasing. In this case, no two distinct angles of
arrival produce identical steering vectors. However, while aliasing technically does
not occur, there may be steering vectors that are very similar, so that grating lobes still exist.
Determining whether this occurs for an arbitrary array is very difficult. In general,
if a non-uniform array is decided upon, the array factor can be checked to ensure that
grating lobes do not occur. Mathematical studies on the uniqueness of steering vectors
can be found in [40-41].
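The aliasing condition is easy to demonstrate numerically. A minimal sketch (the 4-element array and angle pair are illustrative choices, not from the dissertation): with one-wavelength spacing, waves from θ = 60° and θ = 120° have cos θ = ±1/2 and produce identical steering vectors, while half-wavelength spacing distinguishes them:

```python
import numpy as np

def steering_vector(z, theta, wavelength=1.0):
    """Steering vector (2.31) for a linear array with element heights z."""
    k = 2 * np.pi / wavelength
    return np.exp(-1j * k * z * np.cos(theta))

n = np.arange(4)
t1, t2 = np.radians(60.0), np.radians(120.0)   # cos(theta) = +1/2 and -1/2

# Spacing of one wavelength: the two directions alias onto the same vector.
v1 = steering_vector(n * 1.0, t1)
v2 = steering_vector(n * 1.0, t2)
print(np.allclose(v1, v2))   # True -> aliasing

# Half-wavelength spacing: the steering vectors are distinct.
v3 = steering_vector(n * 0.5, t1)
v4 = steering_vector(n * 0.5, t2)
print(np.allclose(v3, v4))   # False -> no aliasing
```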
III. WEIGHTING METHODS IN ANTENNA ARRAYS
3.1. Introduction
From (2.17), it is clear that the weights will have a significant impact on the output
of the antenna array. Since the array factor is a linear function of the weights, weighting
methods are well developed and can be selected to meet a wide range of objectives.
These objectives include pattern steering, nulling energy from specific directions relative
to an array, minimizing the Mean Squared Error (MSE) between a desired output and the
actual output, or minimizing the sidelobe level outside a specified beamwidth in linear
arrays. These techniques will be discussed in this chapter. In addition, adaptive signal
processing methods applied to antenna arrays will be discussed. Most of the methods
described here apply to arrays of arbitrary geometry. However, for simplicity, examples
will be presented for uniform linear arrays with half-wavelength spacing. Hence, the
element positions will be given by d_n = (0, 0, nλ/2) for n = 0, 1, …, N − 1.
3.2. Phase-Tapered Weights
The linear array of Section 2.4 had maximum response in the direction θ = 90°.
The simplest method of altering the direction in which the array is steered is to apply a
linear phase taper to the weights. The phase taper is chosen so that it compensates for the
phase delay associated with the propagation of the signal in the direction of interest. For
example, if the array is to be steered in the direction θ_d, the weights would be given by

w_n = e^(jnπ cos θ_d).    (3.1)

For these weights, the array factor becomes

AF = Σ_{n=0}^{N−1} e^(−jnπ(cos θ − cos θ_d)),    (3.2)

or

AF = (1 − e^(−jNπ(cos θ − cos θ_d))) / (1 − e^(−jπ(cos θ − cos θ_d))).    (3.3)

The magnitude of the array factor (normalized so that the peak is unity, or 0 dB) is
plotted in Figure 6 for N = 5 and θ_d = 45°. The array factor has a maximum at the desired
direction, and, like the result in Figure 4, the sidelobes are 11.9 dB down from the
mainlobe. This simple steering method can be used in two- or three-dimensional arrays
as well as for arbitrary scan angles.
Figure 6. Array factor of steered linear array.
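A minimal numerical check of the phase-taper idea (an illustrative sketch, not from the dissertation): applying the weights (3.1) to the N = 5 half-wavelength array moves the peak of the array factor to the steering angle:

```python
import numpy as np

# Phase-tapered weights (3.1): w_n = exp(j*n*pi*cos(theta_d)) steer a
# half-wavelength-spaced linear array to theta_d.
N = 5
theta_d = np.radians(45.0)
n = np.arange(N)
w = np.exp(1j * n * np.pi * np.cos(theta_d))

theta = np.linspace(0.0, np.pi, 3601)
# AF (3.2): sum_n exp(-j*n*pi*(cos(theta) - cos(theta_d)))
af = (w[:, None] * np.exp(-1j * np.pi * np.outer(n, np.cos(theta)))).sum(axis=0)

peak = np.degrees(theta[np.argmax(np.abs(af))])
print(round(peak, 1))   # 45.0 -> beam steered as intended
```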
3.3. Schelkunoff Polynomial Method
A weighting scheme for placing nulls in specific directions of an array factor was
developed by Schelkunoff [22, 42]. In general, an N-element array can null signals
arriving from N − 1 distinct directions.
To illustrate the method, the array factor

AF = Σ_{n=0}^{N−1} w_n e^(−jnπ cos θ)    (3.4)

can be rewritten as a polynomial:

AF(z) = Σ_{n=0}^{N−1} w_n z^n,    (3.5)

where

z = e^(−jπ cos θ).    (3.6)

Since a polynomial can be written as the product of its own zeros, it follows that

AF(z) = w_{N−1} Π_{n=0}^{N−2} (z − z_n),    (3.7)

where the z_n are the zeros of the array factor. By selecting the desired zeros and setting
(3.7) equal to (3.5), the weights can be found.
As an example, assume an N = 3 element array with zeros to be placed at 45° and
120°. In that case, the following values are calculated:

z_0 = e^(−jπ cos 45°)    (3.8)

and

z_1 = e^(−jπ cos 120°).    (3.9)

Arbitrarily letting w_{N−1} = w_2 = 1, (3.7) becomes

AF(z) = z² − (z_0 + z_1)z + z_0 z_1.    (3.10)
Setting (3.10) equal to the original form of the array factor (3.5), the weights are easily
found to be:

W = [w_0  w_1  w_2]^T = [z_0 z_1   −(z_0 + z_1)   1]^T.    (3.11)
The normalized array factor for the specified weights is plotted in Figure 7. As desired,
the pattern has nulls at 45° and 120°.
Figure 7. Array pattern with weights from Schelkunoff method.
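The weight computation above is easy to reproduce numerically. A sketch (not from the dissertation) using NumPy's polynomial expansion of the chosen zeros:

```python
import numpy as np

# Schelkunoff nulls: place array-factor zeros at 45 and 120 degrees, (3.8)-(3.9).
zeros = np.exp(-1j * np.pi * np.cos(np.radians([45.0, 120.0])))

# np.poly expands prod(z - z_n) into coefficients [1, -(z0+z1), z0*z1]
# (highest power first); reversing gives [w_0, w_1, w_2] as in (3.11).
w = np.poly(zeros)[::-1]

def af(theta_deg, w):
    """Evaluate AF(z) = sum_n w_n z^n with z = exp(-j*pi*cos(theta)), per (3.5)."""
    z = np.exp(-1j * np.pi * np.cos(np.radians(theta_deg)))
    return sum(wn * z**n for n, wn in enumerate(w))

print(abs(af(45.0, w)))    # ~0 (null)
print(abs(af(120.0, w)))   # ~0 (null)
print(abs(af(90.0, w)))    # nonzero response away from the nulls
```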
3.4. Dolph-Chebyshev Method
Often in antenna arrays it is desirable to receive energy from a specific direction
and reject signals from all other directions. In this case, for a specified main beamwidth
the sidelobes should be as low as possible. For linear, uniformly spaced arrays of
isotropic sensors steered to broadside (θ_d = 90°), the Dolph-Chebyshev method will
return weights that achieve this. A weighting method for obtaining minimum sidelobes
in arbitrarily spaced arrays of any dimension, steered to any scan angle and for any
antenna type, is derived in Chapter 6.
In observing array factors as in Figure 4, note that the sidelobes decrease in
magnitude away from the mainbeam. To have the lowest overall sidelobe level, the
sidelobe with the highest intensity should be decreased at the expense of raising the
intensity of the lower sidelobes. The result is that, at the minimum overall sidelobe
level, the sidelobes all have the same peak value. Dolph observed this and employed
Chebyshev polynomials, which have equal-magnitude peak variations (or ripples) over a
certain range. By matching the array factor to a Chebyshev polynomial, the equal-ripple
(or constant-sidelobe) weights can be obtained. The actual process is straightforward but
cumbersome to write out; for details see [22]. Several articles have been written on
efficient computation of the Dolph-Chebyshev weights [43-44].
As an example, a uniformly spaced linear array with half-wavelength spacing and
N=6 is used. The Dolph-Chebyshev weights are calculated for a sidelobe level of −30
dB. The associated magnitude of the array factor is plotted in Figure 8. The null-to-null
beamwidth is approximately 60°. Note that all the sidelobes are equal in magnitude at
−30 dB.
Figure 8. Dolph-Chebyshev array for N=6 with sidelobes at −30 dB.
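Equal-ripple weights of this kind are available off the shelf; SciPy's `chebwin` computes a Dolph-Chebyshev taper. The following sketch (an illustration, not from the dissertation) reproduces the N = 6, −30 dB example:

```python
import numpy as np
from scipy.signal.windows import chebwin

# Dolph-Chebyshev weights for N = 6 elements and -30 dB sidelobes.
N = 6
w = chebwin(N, at=30)   # 'at' is the sidelobe attenuation in dB

# Evaluate the broadside array factor of a lambda/2-spaced array over angle.
theta = np.linspace(0.0, np.pi, 18001)
n = np.arange(N)
af = (w[:, None] * np.exp(-1j * np.pi * np.outer(n, np.cos(theta)))).sum(axis=0)
af_db = 20 * np.log10(np.abs(af) / np.abs(af).max() + 1e-300)

# All sidelobe peaks sit at -30 dB; look safely outside the main beam,
# whose first nulls fall near |cos(theta)| = 0.5.
sidelobes = af_db[np.abs(np.cos(theta)) > 0.6]
print(round(sidelobes.max(), 1))   # ~ -30.0
```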
3.5. Minimum Mean-Square Error (MMSE) Weighting
The weighting methods discussed previously have been deterministic; that is, they
have not dealt with noise or statistical representations of the desired signals or
interference. In this section, a more general beamforming technique is developed that
takes into account the statistical behavior of the signal environment.
Assume now the input to the array consists of one desired signal, s(t), with an
associated wavevector k_s. Assume there exists noise n_i(t) at each antenna. The noise
at each antenna can be written in vector form as

N(t) = [n_1(t)  n_2(t)  …  n_N(t)]^T.    (3.12)

In addition, assume there are G interferers, each having narrowband signals given by
I_a(t) and wavevectors given by k_a, a = 1, 2, …, G. Using the steering vector notation
for the phase delays as in (2.31), the input to the antenna array can then be written as

X(t) = s(t)v(k_s) + N(t) + Σ_{a=1}^{G} I_a(t)v(k_a).    (3.13)
The desired output from the antenna array (or spatial filter) is

Y_d(t) = s(t).    (3.14)

The actual output is

Y(t) = W^H X(t),    (3.15)

where H is the Hermitian operator (conjugate transpose). Equation (3.15) differs from
(2.15) because the mathematics in the derivation will be simpler if the weights used are in
the form of (3.15). The error can then be written as

e(t) = Y_d(t) − Y(t).    (3.16)

The minimum mean-squared error (MMSE) estimate seeks to minimize the expected
value of the squared magnitude of e(t). The mean-squared error (MSE) is

MSE = E[e(t)e*(t)],    (3.17)

where * indicates complex conjugate and E[·] is the expectation operator. Expanding
(3.17) with (3.15), the MSE becomes
MSE = E[ (s(t) − W^H X(t)) (s(t) − W^H X(t))* ].    (3.18)

Multiplying the terms above, the MSE becomes

MSE = E[W^H X(t) X^H(t) W] + E[s(t)s*(t)] − E[s*(t) W^H X(t)] − E[s(t) X^H(t) W].    (3.19)
The first term in (3.19) can be simplified to

E[W^H X(t) X^H(t) W] = W^H E[X(t) X^H(t)] W,    (3.20)

since the expectation is a linear operator and the weights are fixed. The autocorrelation
matrix, R_XX, is defined to be

R_XX = E[X(t) X^H(t)].    (3.21)

The second term in (3.19) is the signal power, σ_s²:

σ_s² = E[s(t)s*(t)].    (3.22)

Defining

Λ = E[X(t)s*(t)],    (3.23)

the third term in (3.19) becomes

E[s*(t) W^H X(t)] = W^H Λ.    (3.24)

Finally, the fourth term in (3.19) is just the complex conjugate of the third term:

E[s(t) X^H(t) W] = Λ^H W.    (3.25)

Equation (3.19) can then be rewritten as

MSE = σ_s² + W^H R_XX W − W^H Λ − Λ^H W.    (3.26)
The goal is to find the W that produces the minimum MSE. The gradient of (3.26) with
respect to W is

∇MSE = 2R_XX W − 2Λ.    (3.27)

Setting (3.27) equal to zero and solving gives the optimal weights, W_opt:

W_opt = R_XX⁻¹ Λ.    (3.28)
Equation (3.28) requires two pieces of information: the autocorrelation matrix and
the vector Λ. The inverse of the autocorrelation matrix is often estimated using the
Sample Matrix Inverse (SMI) method. The estimate is denoted with an overbar,
R̄_XX⁻¹, and uses K snapshots of the input vector X to formulate the estimate:

R̄_XX⁻¹ = [ Σ_{k=1}^{K} X(k) X^H(k) ]⁻¹.    (3.29)
Assuming the signal of interest is uncorrelated in time with the noise and
interference, (3.23) along with (3.13) yields

Λ = E[ (s(t)v(k_s) + N(t) + Σ_{a=1}^{G} I_a(t)v(k_a)) s*(t) ] = σ_s² v(k_s).    (3.30)

Hence, the vector Λ can be determined if the direction of the signal (given by k_s) and
the signal power (σ_s²) are known. Often the incoming direction and power can be
determined by using a training sequence to calibrate the array. The optimal weights can
be rewritten using (3.30) as

W_opt = σ_s² R_XX⁻¹ v(k_s).    (3.31)
Equation (3.31) represents the weights that minimize the MSE. The optimal MSE is
found by substituting (3.31) into (3.26):

MSE_opt = σ_s² − σ_s⁴ v^H(k_s) R_XX⁻¹ v(k_s).    (3.32)

Similar formulations can be used to formulate weights that maximize the signal-to-noise
ratio (SNR) when the autocorrelation matrix of the interference and noise can be
estimated [45].
As an example, consider the case of the desired signal arriving from θ_d = 110° with
a signal power of σ_s² = 1. Two interferers, arriving from θ_1 = 40° and θ_2 = 90°, each
have σ_I² = 10. The array will have N = 3 elements. Two cases will be considered, the
first with noise power σ_n² = 0.01 (SNR = 20 dB), and the second with σ_n² = 1 (SNR = 0
dB). The optimal weights can then be calculated using (3.31).
The resulting array factor magnitudes are plotted in Figure 9. Observe that for the
high-SNR case, the pattern places nulls exactly in the directions of the interferers. For the
low-SNR case, the pattern puts less emphasis on nulling out the interferers. This is
because the gain in combating independent noise sources is best obtained by combining
the received signals with equal gain [45]. Note that neither array factor is maximum
towards the signal of interest, θ_d = 110°.
Figure 9. Array factor magnitudes for MMSE weights.
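The weights (3.31) are straightforward to evaluate for the high-SNR scenario. The following sketch (an illustration, not from the dissertation) builds the analytic autocorrelation matrix for uncorrelated sources and confirms the interferer nulls:

```python
import numpy as np

def v(theta_deg, N=3):
    """Steering vector (2.31) for a lambda/2-spaced linear array."""
    n = np.arange(N)
    return np.exp(-1j * n * np.pi * np.cos(np.radians(theta_deg)))

# Scenario from the text: signal at 110 deg (power 1), interferers at
# 40 and 90 deg (power 10 each), noise power 0.01 (SNR = 20 dB).
N, sig2_s, sig2_I, sig2_n = 3, 1.0, 10.0, 0.01
vs = v(110.0, N)

# Analytic R_XX (3.21) for mutually uncorrelated sources and white noise.
R = (sig2_s * np.outer(vs, vs.conj())
     + sig2_I * np.outer(v(40.0, N), v(40.0, N).conj())
     + sig2_I * np.outer(v(90.0, N), v(90.0, N).conj())
     + sig2_n * np.eye(N))

W = sig2_s * np.linalg.solve(R, vs)      # MMSE weights (3.31)

af = lambda t: abs(np.vdot(W, v(t, N)))  # |W^H v(theta)|
print(af(40.0), af(90.0))   # both tiny: interferer directions are nulled
print(af(110.0))            # much larger response towards the desired signal
```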
3.6. The LMS Algorithm
The weights discussed up until now have not been adaptable; that is, they do not
attempt to change as the signal environment changes. A weight updating strategy that
changes with its environment is known as an adaptive algorithm and adaptive signal
processing has become a field in itself. In this section, the first and arguably most widely
used adaptive algorithm is discussed, the Least Mean Square (LMS) algorithm. This
algorithm was invented by Bernard Widrow along with M. E. Hoff, Jr. and published in a
primitive form in 1960 [46]. The Applebaum algorithm [47] was developed
independently in 1966 and largely uses the same ideas.
The algorithm assumes some a priori knowledge; in this version (the spatial LMS
algorithm), the known information is assumed to be the desired signal power (σ_s²) and
the signal direction, k_s. The algorithm iteratively steps towards the MMSE weights. If
the environment changes, then the algorithm will step towards the new MMSE weights.
Samples of the input vector, X, will be ordered and written as X(k).
To accomplish the iterative minimization of the MSE, recall that the gradient of the
MSE as a function of the weights (W) is given by (3.27). The LMS algorithm
approximates the autocorrelation matrix at each time step by

R_XX(k) = X(k) X^H(k).    (3.33)

Then the gradient of the MSE can be approximated at each time step as

∇MSE(k) = 2X(k) X^H(k) W(k) − 2σ_s² v(k_s).    (3.34)

To minimize the MSE, the LMS algorithm simply increments the weights in the direction
of decreasing MSE. The update algorithm for the weights can then be written as

W(k+1) = W(k) − (λ/2) ∇MSE(k),    (3.35)

where λ is a positive scalar that controls how large the steps are for the weights.
Substituting (3.34) into (3.35) produces the LMS algorithm:

W(k+1) = W(k) + λ{ σ_s² v(k_s) − X(k) X^H(k) W(k) }.    (3.36)
Equation (3.36) actually represents one of the many forms of the LMS algorithm. The
versions primarily differ in the a priori knowledge required.
The algorithm's simplicity is the primary reason for its widespread use. In addition,
it has fairly decent convergence properties and has been extensively studied. In order to
have stable results (the expected MSE converging to a constant value), the parameter
λ should be chosen according to

0 < λ < 2/λ_max(R_XX),    (3.37)

where λ_max(R_XX) is the largest eigenvalue of the autocorrelation matrix [48]. The speed
of the convergence is governed by the condition number (ratio of largest to smallest
eigenvalue) of the autocorrelation matrix [49].
As an example of the LMS algorithm, the interference and noise scenario of Section
3.5 is again considered, this time with SNR = 20 dB. The noise will be additive white
Gaussian noise (AWGN) that is independent at each antenna. The array will be the linear
array of N = 5 elements with half-wavelength spacing. The algorithm is initiated with a
weight of unity applied to all elements:

W(1) = [1  1  …  1]^T.    (3.38)
The parameter λ is chosen to be

λ = 0.1/λ_max(R_XX) = 0.015.    (3.39)
An example run is conducted, and the resulting MSE is plotted at each iteration [from
(3.26)], along with the optimal MSE [from (3.32)], in Figure 10. The LMS algorithm
moves towards the optimal weights fairly efficiently in this case. Since the algorithm
uses an instantaneous estimate of the autocorrelation matrix at each time step, some of the
steps actually increase the MSE; on average, however, the MSE decreases. The algorithm
is also fairly robust to changing environments.
Figure 10. MSE at each iteration, along with the optimal MSE.
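A minimal simulation of the update (3.36) for a scenario of this kind follows. This is an illustrative sketch, not the dissertation's experiment: the random-signal model, step size, and iteration count are choices made here:

```python
import numpy as np

rng = np.random.default_rng(0)

def v(theta_deg, N):
    """Steering vector (2.31) for a lambda/2-spaced linear array."""
    n = np.arange(N)
    return np.exp(-1j * n * np.pi * np.cos(np.radians(theta_deg)))

# N = 5 elements; signal at 110 deg (power 1), interferers at 40 and 90 deg
# (power 10 each), white noise of power 0.01 at each element.
N, sig2_s, sig2_I, sig2_n = 5, 1.0, 10.0, 0.01
vs, v1, v2 = v(110.0, N), v(40.0, N), v(90.0, N)

def snapshot():
    # BPSK-like desired symbol plus complex Gaussian interference and AWGN.
    s = rng.choice([-1.0, 1.0])
    i1 = np.sqrt(sig2_I / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    i2 = np.sqrt(sig2_I / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    nz = np.sqrt(sig2_n / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return s * vs + i1 * v1 + i2 * v2 + nz

W = np.ones(N, dtype=complex)   # initial weights, as in (3.38)
lam = 0.002                     # step size, well inside the bound (3.37)
for _ in range(5000):
    X = snapshot()
    # LMS update (3.36): W += lam * (sig_s^2 * v(k_s) - X X^H W)
    W = W + lam * (sig2_s * vs - X * np.vdot(X, W))

# Response toward the signal should dominate the interferer directions.
print(abs(np.vdot(W, vs)), abs(np.vdot(W, v1)), abs(np.vdot(W, v2)))
```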
Several adaptive algorithms have expanded upon ideas used in the original LMS
algorithm. Most of these algorithms seek to produce improved convergence properties at
the expense of increased computational complexity. For instance, the recursive least
squares (RLS) algorithm seeks to minimize the MSE just as the LMS algorithm does [48].
However, it uses a more sophisticated update, based on the matrix inversion lemma [45],
to find the optimal weights. Both of these algorithms (and all others based on the
LMS algorithm) attempt to converge to the same optimal weights,
given by (3.31).
IV. METHODS OF ANTENNA ARRAY GEOMETRY OPTIMIZATION
4.1. Introduction
The field of electromagnetics was unified into a coherent theory and set of four
fundamental equations by James Clerk Maxwell in 1879 [50]. These equations are
known as Maxwell’s equations. The first is Gauss’s law:
V
ρ = ⋅ ∇ D , (4.1)
where D is the electric flux density and
V
ρ is the volume charge density. The second
equation states that “magnetic monopoles do not exist”, and can be written in
mathematical form:
0 = ⋅ ∇ B , (4.2)
where B is the magnetic flux density. The third equation is known as Ampere’s law:
J
D
H =
∂
∂
− × ∇
t
, (4.3)
where H is the magnetic field and J is the impressed electric current density. The fourth
is Faraday’s law:
$$\nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0, \quad (4.4)$$
where E is the electric field.
While there are only four equations in the set, they are complicated enough that
they can only be solved in closed form for some basic canonical shapes. As a result,
numerical methods for solving electromagnetic problems became necessary. A thorough
introduction and survey of the methods can be found in [51].
Among the most popular of the numerical methods is the finite-difference time-domain (FDTD) method, developed in 1966 by Yee at Lawrence Livermore National Laboratories [52].
magnetic fields using discretized forms of Ampere’s law and Faraday’s law. The
algorithm initially computes the electric fields (assuming the magnetic fields are known)
using Ampere’s law. A small time step later, the algorithm computes the magnetic fields
at that time using Faraday’s law (along with the calculated electric field). This process is
repeated as long as desired and has been widely successful in modeling numerous
electromagnetic problems. Another popular method is the Integral Equation (IE) Method
of Moments (MoM), which numerically solves complex integral equations by assuming a
solution in the form of a sum of weighted basis functions along the structure being
analyzed. The weights are then found by introducing boundary conditions and solving an associated matrix equation for the weights, thereby leading to the solution [53].
Because of the difficulty in obtaining solutions to electromagnetic problems,
optimization is not simple. Antenna arrays, being a specific class of electromagnetic
problems, are no exception. However, significant developments over the past 50 years in
the field of mathematical optimization are now being applied to electromagnetic
problems. The tremendous increase in computing power over the last few decades has
enabled complex problems to be solved and led to large advances in the fields of
numerical electromagnetics and in optimization. This chapter describes the optimization methods that entered the field of electromagnetics in the late 20th century.
The first set of methods, linear programming and convex optimization, belongs to a class of optimization methods that are deterministic. These problems have a unique solution that can be verified to be globally optimal. However, due to the complex nature of the problems, the solutions are obtained numerically rather than analytically.
The second set of methods discussed in this chapter, Simulated Annealing (SA) and Particle Swarm Optimization (PSO), belongs to a class of optimization methods that are stochastic in nature. These methods produce solutions to the most general optimization problems, which have very little structure and cannot be solved via other methods. The resulting solutions from these methods unfortunately cannot be verified to be globally optimal. However, they have recently been receiving a lot of attention in the antenna field because they can be applied to a wide range of problems and can be used to obtain solutions that achieve a desired performance metric. In March 2007, the IEEE Transactions on Antennas and Propagation dedicated an entire issue to optimization techniques in electromagnetics and antenna system design. An overview of the methods and their applications to electromagnetics can be found in [54]. Many of these papers used techniques that were stochastic in nature, including the popular genetic algorithm (GA) [55].
These optimization techniques are often coupled with the numerical methods
discussed previously. For instance, the PSO algorithm was used in conjunction with the
FDTD method in [56]. The genetic algorithm was used along with the method of
moments for the design of integrated antennas in [57].
4.2. Linear Programming
The most general form of a mathematical optimization problem can be expressed as

$$\begin{aligned} \text{minimize} \ \ & f(\mathbf{x}) \\ \text{subject to} \ \ & \mathbf{x} \in \chi \end{aligned}. \quad (4.5)$$
Here f(x) is the objective function to be minimized, and χ is known as the feasible set, or
set of all possible solutions. In the following, ‘subject to’ will be abbreviated as ‘s. t.’.
The solution ($\mathbf{x}_{opt}$) to (4.5) will have the property

$$f(\mathbf{x}_{opt}) \le f(\mathbf{x}) \quad \forall\, \mathbf{x} \in \chi, \quad (4.6)$$

where $\forall$ is commonly used in mathematics to state ‘for all’. The solution is not
necessarily unique but exists as long as χ is not the empty set.
A linear program (LP) is a widely studied optimization problem that has numerous
practical applications, one of which is shown at the end of this section. The theory on
this subject was developed by George Dantzig and John von Neumann in 1947 [58]. The
variables in a linear program are written as an $N$-dimensional vector of real numbers:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}. \quad (4.7)$$
The objective function to be minimized is a linear function of the problem variables:

$$f(\mathbf{x}) = \mathbf{c}^T \mathbf{x}, \quad (4.8)$$

where $\mathbf{c}$ is an $N$-dimensional (real) vector.
The feasible set $\chi$ in a linear program is a set of $M$ affine inequalities. Each inequality can be written in the form:

$$\mathbf{a}_i^T \mathbf{x} \le b_i, \quad (4.9)$$

where $\mathbf{a}_i$ is an $N$-dimensional real vector, $b_i$ is a real number, and $i = 1, 2, \ldots, M$.
Without any constraints, the vector $\mathbf{x}$ can be any vector in $\mathbb{R}^N$. Each constraint in the form of (4.9) divides the space $\mathbb{R}^N$ into two half-spaces. In two dimensions ($N = 2$), the divider is a straight line; in three dimensions ($N = 3$), the divider is a plane, and so on. The resulting feasible region $\chi$ is the intersection of all of these half-spaces. The set of constraints

$$\begin{aligned} \mathbf{a}_1^T \mathbf{x} &\le b_1 \\ \mathbf{a}_2^T \mathbf{x} &\le b_2 \\ &\ \ \vdots \\ \mathbf{a}_M^T \mathbf{x} &\le b_M \end{aligned} \quad (4.10)$$
are often abbreviated as

$$\mathbf{A}\mathbf{x} \le \mathbf{b}. \quad (4.11)$$
In (4.11), $\mathbf{A}$ is an $M \times N$ matrix given by

$$\mathbf{A} = \begin{bmatrix} \mathbf{a}_1^T \\ \mathbf{a}_2^T \\ \vdots \\ \mathbf{a}_M^T \end{bmatrix}, \quad (4.12)$$
and $\mathbf{b}$ is an $M$-dimensional vector given by

$$\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix}. \quad (4.13)$$
The inequality sign in (4.11) is understood to be componentwise (it must be satisfied for all inequalities). The standard form of an LP can then be written as in (4.14).

$$\begin{aligned} \min \ & \mathbf{c}^T \mathbf{x} \\ \text{s.t.} \ & \mathbf{A}\mathbf{x} \le \mathbf{b} \end{aligned} \quad (4.14)$$
An equality constraint can be viewed as two inequality constraints, so LPs are often
written in the form given in (4.15), where C is a matrix and f is a vector. If no vector x
satisfies all the constraints, the problem is said to be infeasible.
$$\begin{aligned} \min \ & \mathbf{c}^T \mathbf{x} \\ \text{s.t.} \ & \mathbf{A}\mathbf{x} \le \mathbf{b} \\ & \mathbf{C}\mathbf{x} = \mathbf{f} \end{aligned} \quad (4.15)$$
Extensive work has gone into understanding the problem presented in (4.15).
Solutions found to (4.15) must satisfy a set of optimality conditions, and they can
therefore be verified to be globally optimal [59]. In addition, several numerical methods,
such as the simplex algorithm [60] and the rapid interior point method [61], have been
developed to efficiently solve the LP. As a result, if an optimization problem can be put
into the form of an LP, an optimal vector (if one exists) can be found efficiently and
verified to be globally optimal. Commonly used computational software programs, including Mathematica and Matlab, now have built-in routines for solving linear programs.
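To make the geometry of (4.14) concrete, the sketch below solves a tiny two-variable LP by brute force: it intersects every pair of constraint boundaries, keeps the intersections that satisfy $\mathbf{A}\mathbf{x} \le \mathbf{b}$, and picks the vertex minimizing $\mathbf{c}^T\mathbf{x}$. This exploits the fact that a bounded, feasible LP attains its optimum at a vertex of the polyhedron; the particular constraints are invented for illustration, and practical solvers use the simplex or interior point methods instead.

```python
from itertools import combinations

# Minimize c^T x subject to A x <= b, for x in R^2.
# Feasible set: the unit square 0 <= x1, x2 <= 1 (four half-spaces).
c = [-1.0, -1.0]
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]

def intersect(i, j):
    """Intersection of the boundary lines a_i^T x = b_i and a_j^T x = b_j."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        return None  # parallel boundaries never meet
    x1 = (b[i] * a4 - a2 * b[j]) / det
    x2 = (a1 * b[j] - b[i] * a3) / det
    return (x1, x2)

def feasible(x):
    """Componentwise check of A x <= b, as in (4.11)."""
    return all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + 1e-9
               for k in range(len(b)))

vertices = [p for i, j in combinations(range(len(b)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # the vertex (1, 1) minimizes -x1 - x2 over the unit square
```

Enumerating vertices is exponential in general, which is why the simplex and interior point methods matter, but for two variables it shows exactly how the half-spaces of (4.10) carve out the feasible region.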
To illustrate the utility of linear programs, a method of determining sidelobe
minimizing weights for symmetric linear arrays with real weights steered to broadside
will be presented. This follows the discussion in [27]. The results will be extended in
Chapter 6 to work for arbitrarily spaced arrays with complex weights steered to any
angle, with arbitrary antenna elements and an arbitrary beamwidth.
A symmetric linear array is an array with elements spaced symmetrically about the
origin, as shown in Figure 11.
Figure 11. Symmetric linear array.
An array of this type with real weights and 2N elements will have an array factor given
by
$$AF(\theta) = \sum_{n=1}^{N} w_n \cos(2\pi d_n \cos\theta), \quad (4.16)$$

where $d_n$ is the position of the $n$th element along the z-axis, expressed in wavelengths. The objective is to
determine the weights that produce the lowest possible sidelobe level. The sidelobe level
will be defined as the maximum value of the magnitude of the array factor outside of a
specified beamwidth. The set of all angles in which the array factor is to be suppressed
will be written as $\Theta$. The sidelobe level (SLL) can be written mathematically as

$$SLL = \max_{\theta \in \Theta} |AF(\theta)|. \quad (4.17)$$
Since the array factor is to be maximum at broadside, the following constraint is
imposed:
$$AF(90°) = 1. \quad (4.18)$$
The problem of minimizing the sidelobe level can then be written as an optimization
problem:
$$\begin{aligned} \min \ & SLL \\ \text{s.t.} \ & AF(90°) = 1 \end{aligned} \quad (4.19)$$
This problem can be written as an LP in standard form. First, let t represent the
maximum sidelobe level. Sample the region $\Theta$ into $R$ sample points ($\theta_1, \theta_2, \ldots, \theta_R$).
The sidelobes will be suppressed at the sample points; following the optimization
procedure, it can be verified that the sidelobes are also suppressed between the samples.
Equation (4.19) can be rewritten into the form given in (4.20).
$$\begin{aligned} \min \ & t \\ \text{s.t.} \ & AF(90°) = 1 \\ & |AF(\theta_i)| \le t, \quad i = 1, 2, \ldots, R \end{aligned} \quad (4.20)$$
Each one of the constraints in (4.20) can be written as an affine constraint as in (4.10).
To see this, define the problem variables to be
$$\mathbf{X} = \begin{bmatrix} t \\ w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix}. \quad (4.21)$$
The objective function in (4.20) can be rewritten as

$$t = \mathbf{c}^T \mathbf{X} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} \mathbf{X}. \quad (4.22)$$
The equality constraint in (4.20) can be rewritten using (4.16) along with the vector $\mathbf{X}$ as

$$\begin{bmatrix} 0 & 1 & 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} t \\ w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = 1, \quad (4.23)$$
or

$$\mathbf{a}_0^T \mathbf{X} = 1. \quad (4.24)$$
Finally, the inequality constraints in (4.19) can be rewritten as

$$-t \le AF(\theta_i) \le t, \quad (4.25)$$

for $i = 1, 2, \ldots, R$. Using (4.16), the inequality on the right becomes
$$\begin{bmatrix} -1 & \cos(2\pi d_1 \cos\theta_i) & \cos(2\pi d_2 \cos\theta_i) & \cdots & \cos(2\pi d_N \cos\theta_i) \end{bmatrix} \begin{bmatrix} t \\ w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} \le 0, \quad (4.26)$$
or

$$\mathbf{a}_i^T \mathbf{X} \le 0. \quad (4.27)$$
Similarly, the inequality on the left in (4.25) becomes

$$\begin{bmatrix} -1 & -\cos(2\pi d_1 \cos\theta_i) & -\cos(2\pi d_2 \cos\theta_i) & \cdots & -\cos(2\pi d_N \cos\theta_i) \end{bmatrix} \begin{bmatrix} t \\ w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} \le 0, \quad (4.28)$$

or

$$\mathbf{f}_i^T \mathbf{X} \le 0. \quad (4.29)$$
Using (4.22), (4.24), (4.27) and (4.29), the optimization problem of (4.20) can be
rewritten as in (4.30).
$$\begin{aligned} \min \ & \mathbf{c}^T \mathbf{X} \\ \text{s.t.} \ & \mathbf{a}_0^T \mathbf{X} = 1 \\ & \mathbf{a}_i^T \mathbf{X} \le 0, \quad i = 1, 2, \ldots, R \\ & \mathbf{f}_i^T \mathbf{X} \le 0, \quad i = 1, 2, \ldots, R \end{aligned} \quad (4.30)$$
Equation (4.30) is in the same form as the standard LP in (4.15). Hence, solutions can be
rapidly found to this problem using numerical computational software and guaranteed to
be globally optimal.
As an example, consider the following 6-element symmetric linear array with positions

$$\mathbf{d}^T = \begin{bmatrix} \pm 0.2\lambda & \pm 0.5\lambda & \pm 0.85\lambda \end{bmatrix}. \quad (4.31)$$
Weights that minimize the sidelobe level while directing the maximum to broadside cannot be found via the Dolph-Chebyshev method, because the array does not have uniform spacing. The beamwidth will be $40°$; hence, the region of sidelobe suppression will be

$$\Theta = \{0° \le \theta \le 70°\} \cup \{110° \le \theta \le 180°\}. \quad (4.32)$$
Using the linear programming method described in this section, the optimal weights can be found to be

$$\begin{aligned} w_1 &= 0.3724 \\ w_2 &= 0.1309 \\ w_3 &= 0.4967 \end{aligned}, \quad (4.33)$$
where $w_1$ is the weight associated with the first pair of positions in (4.31), while $w_2$ and $w_3$ are associated with the second and third pairs of positions, respectively. The resulting array factor is plotted in Figure 12. The dashed vertical lines in Figure 12 define the boundary of the main beam and specify the region in which the sidelobes are suppressed. The maximum sidelobe level outside the main beam is −11.21 dB. Note that the sidelobes are equal in magnitude, which is expected for sidelobe-minimizing weights.
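The reported solution can be checked numerically. The snippet below evaluates the array factor (4.16) with the weights of (4.33) over the suppression region (4.32); this is a verification sketch (the weight values come from the text above, while the dense angular grid is an assumption), confirming unity gain at broadside and a peak sidelobe near −11.2 dB.

```python
import math

d = [0.2, 0.5, 0.85]            # element positions in wavelengths, from (4.31)
w = [0.3724, 0.1309, 0.4967]    # optimal weights from (4.33)

def af(theta_deg):
    """Array factor (4.16) for the symmetric array steered to broadside."""
    ct = math.cos(math.radians(theta_deg))
    return sum(wn * math.cos(2 * math.pi * dn * ct) for wn, dn in zip(w, d))

# Suppression region (4.32): [0, 70] U [110, 180] degrees, sampled at 0.1 deg.
region = [t / 10 for t in range(0, 701)] + [t / 10 for t in range(1100, 1801)]
sll = max(abs(af(t)) for t in region)

print(af(90.0))              # broadside gain: the weights sum to 1
print(20 * math.log10(sll))  # peak sidelobe level in dB, approximately -11.2
```

The equal-ripple sidelobes mean the maximum is hit at several angles across the region, which is exactly the signature of a sidelobe-minimizing solution.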
Figure 12. Array factor for optimal weights found via linear programming.
4.3. Convex Optimization
Convex optimization problems are a subclass of the general optimization problem given by (4.5). They have recently received a lot of attention in the engineering community because of their wide applicability. These applications include robotics [62], signal processing [63], image processing [64] and information theory [65]. An excellent text for the engineering community on convex optimization has been written [66].

A convex optimization problem is defined by two fundamental characteristics: the feasible set $\chi$ must be convex and the objective function $f(\mathbf{X})$ is a convex function. A convex set is defined such that for every $\mathbf{x}_1$ and $\mathbf{x}_2$ in the set $\chi$, all points along a straight line between $\mathbf{x}_1$ and $\mathbf{x}_2$ are in $\chi$. Mathematically, any point between $\mathbf{x}_1$ and $\mathbf{x}_2$ can be written as

$$\mathbf{z} = \alpha \mathbf{x}_1 + (1 - \alpha)\mathbf{x}_2, \quad (4.34)$$

where $\alpha$ is a scalar between 0 and 1; for all $\alpha$ in this range $\mathbf{z}$ must be in the set. Examples of convex sets are shown in Figure 13. Convex sets are convenient to work with because search algorithms can always move between the current feasible point and the optimal point without running into the boundary of the set. Examples of nonconvex sets are shown in Figure 14; each set contains points $\mathbf{x}_1$ and $\mathbf{x}_2$ such that not all points $\mathbf{z}$ between them, as in (4.34), are in the set.

Figure 13. Examples of convex sets.

Figure 14. Examples of nonconvex sets.

A function is said to be convex on a set $\chi$ if for any two points $\mathbf{X}$ and $\mathbf{Y}$ in $\chi$, it satisfies the following inequality:

$$f(\alpha \mathbf{X} + (1 - \alpha)\mathbf{Y}) \le \alpha f(\mathbf{X}) + (1 - \alpha) f(\mathbf{Y}). \quad (4.35)$$

This means that the curve of the function $f$ will always lie below the straight line connecting the points $(\mathbf{X}, f(\mathbf{X}))$ and $(\mathbf{Y}, f(\mathbf{Y}))$. This is illustrated for a one-dimensional function $f(t)$ in Figure 15. The secant line between two points $x$ and $y$ is also drawn; note that $f(t)$ lies below this line everywhere between $x$ and $y$.

Figure 15. Illustration of a convex function.

A function is said to be strictly convex if the inequality ($\le$) in (4.35) is replaced with a strict inequality ($<$). For convex functions, local minima are always global minima. For a strictly convex function, the global minimum is unique. This property makes convex functions convenient to work with in optimization.

As an example of proving a function is convex, consider $M$ convex functions given by $f_1(\mathbf{X}), f_2(\mathbf{X}), \ldots, f_M(\mathbf{X})$. Define the function $F$ to be the pointwise maximum of the set:

$$F(\mathbf{X}) = \max_i f_i(\mathbf{X}). \quad (4.36)$$

The goal is to show that $F$ is also convex. To accomplish this, rewrite (4.36) as

$$F(\alpha \mathbf{X} + (1 - \alpha)\mathbf{Y}) = \max_i f_i(\alpha \mathbf{X} + (1 - \alpha)\mathbf{Y}). \quad (4.37)$$
Using the convexity of each function $f_i$, it follows that

$$\max_i f_i(\alpha \mathbf{X} + (1 - \alpha)\mathbf{Y}) \le \max_i \left[ \alpha f_i(\mathbf{X}) + (1 - \alpha) f_i(\mathbf{Y}) \right]. \quad (4.38)$$

Since the maximum of a sum of functions over one index can be no larger than the sum of the individual maxima,

$$\max_i \left[ \alpha f_i(\mathbf{X}) + (1 - \alpha) f_i(\mathbf{Y}) \right] \le \alpha \max_i f_i(\mathbf{X}) + (1 - \alpha) \max_j f_j(\mathbf{Y}). \quad (4.39)$$

Equations (4.37)-(4.39) show that

$$F(\alpha \mathbf{X} + (1 - \alpha)\mathbf{Y}) \le \alpha \max_i f_i(\mathbf{X}) + (1 - \alpha) \max_j f_j(\mathbf{Y}) = \alpha F(\mathbf{X}) + (1 - \alpha) F(\mathbf{Y}), \quad (4.40)$$

which proves that $F$ is convex. Hence, the pointwise maximum of convex functions is convex; this property will be used in Chapter 6.
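The pointwise-maximum property proved above is easy to test numerically. The sketch below takes a few convex one-dimensional functions, forms their pointwise maximum as in (4.36), and checks inequality (4.35) over a grid of points and $\alpha$ values; the particular component functions are arbitrary illustrations.

```python
# Convex component functions f_i (each is convex on the real line).
fs = [lambda x: x * x,
      lambda x: abs(x - 1.0),
      lambda x: 2.0 * x + 0.5]

def F(x):
    """Pointwise maximum of the f_i, as in (4.36)."""
    return max(f(x) for f in fs)

# Check (4.35): F(a*x + (1-a)*y) <= a*F(x) + (1-a)*F(y) for many x, y, a.
violations = 0
pts = [i / 4 - 2.0 for i in range(17)]    # grid on [-2, 2]
alphas = [i / 10 for i in range(11)]      # alpha in [0, 1]
for x in pts:
    for y in pts:
        for a in alphas:
            lhs = F(a * x + (1 - a) * y)
            rhs = a * F(x) + (1 - a) * F(y)
            if lhs > rhs + 1e-12:
                violations += 1

print(violations)  # 0: no violations, consistent with (4.40)
```

A numeric sweep like this is of course not a proof, but it is a quick sanity check when deciding whether a candidate objective can be cast into convex form.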
Convex optimization problems are rapidly solvable with computers; the interior
point methods developed for linear programs have been efficiently extended to convex
problems [67]. Since these problems have a very general structure, they can be applied to
a wide range of practical problems. In addition, since the optimal points found can be
mathematically proven to be globally optimal, putting a problem into convex form is very
desirable. Free convex optimization packages have been written for use with Matlab; example packages include CVX and YALMIP, which are available online. A convex optimization problem that greatly extends the minimum-sidelobe weighting of Section 4.2 will be derived and solved in Chapter 6.
4.4. Simulated Annealing
The discussion now turns to stochastic optimization algorithms. These algorithms
work on the most general type of optimization problems; however, they tend to use
random searches, and the results are not guaranteed to be globally optimal. Nevertheless, they have recently been employed extensively in the engineering community.
The simulated annealing (SA) algorithm attempts to mimic the physical process of
annealing of solids. This process involves heating a solid material up to a high
temperature and then allowing it to cool at a very slow rate. The result is that the
particles in the solid arrange themselves in the lowest energy state configuration, usually
an ordered lattice of some sort [68]. The SA algorithm attempts to optimize via the same
procedure. The algorithm was originally introduced in 1983 by Kirkpatrick in the journal
Science as a generalization of the Monte Carlo method for examining the equations of
state of nbody systems [69].
The SA algorithm requires a cost function f [also known as the objective function in
(4.5)], an initial feasible point ($\mathbf{x}_1$), and a perturbation mechanism for obtaining new
points around the current point. The algorithm evaluates the cost function at the start
point, perturbs the point to a new point, evaluates the cost function at this point and
repeats. The following discussion describes finding a minimum.
From the current point $\mathbf{x}_i$, a candidate new point ($\hat{\mathbf{x}}_{i+1}$) in the feasible set is chosen using the perturbation mechanism. If the new point decreases the cost function, then

$$\Delta f_i = f(\hat{\mathbf{x}}_{i+1}) - f(\mathbf{x}_i) < 0, \quad (4.41)$$
and the current solution is updated according to

$$\mathbf{x}_{i+1} = \hat{\mathbf{x}}_{i+1}. \quad (4.42)$$
The algorithm should not accept only points that decrease the cost function; doing so would cause it to converge to a local minimum near the initial point. If the next candidate point increases the objective function, then the probability that the algorithm
updates the current solution to the candidate solution is given by
$$P\{\mathbf{x}_{i+1} = \hat{\mathbf{x}}_{i+1}\} = \exp\!\left(\frac{-\Delta f_i}{T}\right), \quad (4.43)$$
where $T$ represents the current “temperature” of the system. If $T$ is very large, then almost all transitions occur, and the result is a random walk through the space of points, independent of the cost function. When $T$ becomes small, only transitions that decrease the value of the cost function occur; the result is that the algorithm converges to the local minimum of the neighborhood of points in which the current point resides. If the algorithm does not accept the next candidate point, then it simply remains at the previous point,

$$\mathbf{x}_{i+1} = \mathbf{x}_i. \quad (4.44)$$
The simulated annealing algorithm starts the optimization procedure at a high
temperature (sufficiently high such that most transitions occur), and lowers the
temperature slowly enough so that a satisfactory solution is found. There are many
methods of choosing this ‘cooling schedule’; a collection of these is described in [70] and
a specific method is utilized in Chapter 5. The algorithm is stopped once transitions to
new candidate points do not occur over a large number of attempts; the algorithm is then
said to have converged.
The SA algorithm, while being relatively simple to implement, requires a good deal
of care in choosing the initial temperature and an appropriate cooling schedule so that a
globally optimum point is likely to be found. Increased confidence that the proposed
solutions are globally optimum can be obtained by running the algorithm multiple times
from various initial points.
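A minimal implementation of the procedure in (4.41)-(4.44) is sketched below for a one-dimensional multimodal cost function. The geometric cooling schedule, the Gaussian perturbation mechanism, and the cost function itself are all illustrative choices, not the specific schedule used later in Chapter 5.

```python
import math
import random

random.seed(7)

def cost(x):
    # A multimodal test function with many local minima (illustrative).
    return x * x + 10.0 * math.sin(3.0 * x)

x = 4.0                    # initial feasible point x_1
fx = cost(x)
best_x, best_f = x, fx
T = 10.0                   # initial temperature: most transitions accepted

while T > 1e-3:
    # Perturbation mechanism: Gaussian step around the current point.
    cand = x + random.gauss(0.0, 1.0)
    df = cost(cand) - fx
    # Downhill moves are always accepted; uphill moves are accepted with
    # probability exp(-df/T), as in (4.43), so the search can escape
    # local minima while the temperature is still high.
    if df < 0 or random.random() < math.exp(-df / T):
        x, fx = cand, cost(cand)
        if fx < best_f:
            best_x, best_f = x, fx
    T *= 0.999             # geometric cooling schedule

print(best_x, best_f)
```

Running the loop from several different seeds or start points, as suggested above, is the practical way to gain confidence that the best point found is near the global minimum.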
4.5. Particle Swarm Optimization (PSO)
The PSO algorithm is a stochastic, evolutionary algorithm capable of effectively
optimizing difficult multidimensional optimization problems. Examples of the successful
application of the PSO algorithm in the electromagnetics community include antenna
design [71] and array geometry selection [72]. Originally introduced by Kennedy and
Eberhart in 1995 [73], the PSO algorithm has been gaining popularity over the genetic
algorithm and other evolutionary algorithms because of its simplicity in implementation
and efficient optimization. In addition, the algorithm lends itself well to parallel
processing, which is an added bonus.
The PSO algorithm attempts to mimic the behavior of birds or bees in obtaining a
food source. Initially, a flock of birds may start out in random directions searching for
food. As each individual bird travels on its path, it may find food in various locations.
The bird remembers its own ‘personal best’ location of where it had found food. In
addition, the bird may periodically fly up and survey the progress of the other birds in the
flock. In this manner, each individual bird will be aware of the ‘global best’ position, or
location found with the most food by any bird in the flock. Using this general procedure,
a flock of birds will descend on the region that has a relatively high amount of food available.
The PSO algorithm translates this behavior into a mathematical algorithm for
optimization. The PSO algorithm consists of a set of particles (the ‘swarm’), which are
analogous to the birds. The algorithm also has a cost or fitness function, which evaluates
the current position of each bird; this is analogous to a bird evaluating how much food is
in a certain location.
It will be assumed in the following discussion that there is a feasible set $\chi$ to be optimized over, in which each element can be represented as an $N$-dimensional real vector, and a fitness function $f: \chi \to \mathbb{R}$ that evaluates each position to a real number.
The algorithm starts with M particles selected at random positions within the
feasible set. The number M depends on the dimension and difficulty of the problem, and
is one of the parameters left to the algorithm implementer. The algorithm is iterative and
the locations will change at each time step. The $i$th particle at time $t$ will be at the location given by $\mathbf{x}_i^t$, where $i$ is an integer between 1 and $M$, and $t$ is an integer specifying the current time step. Each particle will also have a randomly selected initial velocity vector. The $i$th particle at time $t$ will have a velocity written as $\mathbf{v}_i^t$.
In addition, each particle will record the location of its ‘personal best position’. This is the location that the current particle has found to be the fittest (minimum) so far along its trajectory. The personal best position will be written for the $i$th particle as $\mathbf{p}_i$, and the corresponding fitness value for each of these positions will be written as $P_i$. Each particle will also be aware of the ‘global best position’, which is the position that has been found to be the fittest so far from among all the particles and will be written as the vector $\mathbf{g}$. The global best value will also be recorded, and it will be written as

$$G = \min_i P_i. \quad (4.45)$$
Once the random initial positions and velocities have been chosen for each particle,
the fitness value for each of the positions is evaluated, giving the personal and global best
positions and values. The algorithm then updates the velocity and position of each
particle at every time step until the simulation is stopped.
To perform the updates, first define matrices $\mathbf{U}_{1i}^t$ and $\mathbf{U}_{2i}^t$ according to

$$\mathbf{U}_{ai}^t = \begin{bmatrix} u_{a1}^t & 0 & \cdots & 0 \\ 0 & u_{a2}^t & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & u_{aN}^t \end{bmatrix}, \quad (4.46)$$

where $a$ is equal to 1 or 2, and each $u_{ai}^t$ is an independent uniformly distributed real variable on [0,1]. The velocity is then updated at each time step according to
$$\mathbf{v}_i^t = w_V \mathbf{v}_i^{t-1} + c_1 \mathbf{U}_{1i}^t (\mathbf{p}_i - \mathbf{x}_i^{t-1}) + c_2 \mathbf{U}_{2i}^t (\mathbf{g} - \mathbf{x}_i^{t-1}), \quad (4.47)$$
where $w_V$ is a real number called the ‘inertial weight’, $c_1$ is a real number that accelerates the particle towards its personal best position, and $c_2$ is a real number that accelerates the particle towards its global best position. In this manner, each particle moves in a random fashion around the solution space but is stochastically drawn towards the particle's previous best location and the swarm's global best location. The inertial weight $w_V$ is a real number in the range [0,1] that controls how much the updated velocity depends on the previous velocity. Studies on PSO have shown that 2.0 is a good choice for both parameters $c_1$ and $c_2$ [74].
The position is then updated according to

$$\mathbf{x}_i^t = \mathbf{x}_i^{t-1} + \mathbf{v}_i^{t-1}, \quad (4.48)$$
and the fitness function is evaluated at each of the new locations. Finally, the personal
best and global best positions and values are updated if possible, and then the process
repeats again.
This algorithm is relatively simple to implement but performs well on general
optimization problems in comparison to other evolutionary algorithms. The biggest
drawback to the method is that the resulting solutions cannot be verified to be globally
optimal.
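The update equations (4.46)-(4.48) translate directly into code. Below is a minimal PSO sketch minimizing the two-dimensional sphere function; the per-dimension uniform random numbers play the role of the diagonal matrices $\mathbf{U}_{1i}^t$ and $\mathbf{U}_{2i}^t$, and the swarm size, iteration count, inertial weight, and test function are illustrative choices ($c_1 = c_2 = 2.0$ follows the recommendation cited above).

```python
import random

random.seed(3)

N = 2                      # problem dimension
M = 15                     # swarm size
w_V, c1, c2 = 0.7, 2.0, 2.0

def fitness(x):
    # Sphere function: global minimum of 0 at the origin (illustrative).
    return sum(v * v for v in x)

# Random initial positions and velocities, as the algorithm prescribes.
xs = [[random.uniform(-5, 5) for _ in range(N)] for _ in range(M)]
vs = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(M)]
pbest = [list(x) for x in xs]                  # personal best positions p_i
pval = [fitness(x) for x in xs]                # personal best values P_i
g = list(pbest[min(range(M), key=lambda i: pval[i])])  # global best g
G0 = min(pval)                                 # initial global best value

for t in range(200):
    for i in range(M):
        for n in range(N):
            u1, u2 = random.random(), random.random()  # entries of U1, U2
            # Velocity update (4.47): inertia plus pulls toward the
            # personal best and the global best.
            vs[i][n] = (w_V * vs[i][n]
                        + c1 * u1 * (pbest[i][n] - xs[i][n])
                        + c2 * u2 * (g[n] - xs[i][n]))
            xs[i][n] += vs[i][n]               # position update (4.48)
        f = fitness(xs[i])
        if f < pval[i]:                        # update personal best
            pval[i], pbest[i] = f, list(xs[i])
            if f < fitness(g):                 # update global best
                g = list(xs[i])

G = fitness(g)
print(g, G)
```

Because the global best is only ever replaced by a fitter point, its value is non-increasing over the run; that monotonicity is the one guarantee the algorithm offers, since (as noted above) global optimality cannot be verified.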
V. ARRAY GEOMETRY OPTIMIZATION FOR INTERFERENCE SUPPRESSION
5.1. Introduction
In this chapter, the influence of array geometry on the performance of adaptive
antenna arrays is examined by solving a specific wireless communication problem. The
problem of interference is addressed, which can occur intentionally (as in jamming) or
unintentionally (as in wireless devices sharing a frequency band).
The steady-state weights that many adaptive algorithms (for instance, LMS and RLS) converge to are the MMSE weights in (3.31). The optimal MSE is rewritten from (3.32):

$$\text{MSE}_{opt} = \sigma_s^2 - \sigma_s^4 \mathbf{v}^H(\mathbf{k}_s) \mathbf{R}_{XX}^{-1} \mathbf{v}(\mathbf{k}_s), \quad (5.1)$$
which is a fair measure of the performance of an adaptive array. In (5.1) the weights are
absent because the optimal weights have already been substituted into the expression.
The signal power,
2
S
σ , and the signal direction, given by
s
k , are part of the wireless
environment and cannot be changed. The terms that remain in (5.1) are the steering
vector v(
s
k ) and the autocorrelation matrix
XX
R . The steering vector can be rewritten
from (2.31) as
$$\mathbf{v}(\mathbf{k}_s) = \begin{bmatrix} e^{-j \mathbf{k}_s \cdot \mathbf{d}_1} \\ e^{-j \mathbf{k}_s \cdot \mathbf{d}_2} \\ \vdots \\ e^{-j \mathbf{k}_s \cdot \mathbf{d}_N} \end{bmatrix}, \quad (5.2)$$
which is a nonlinear function of the array element positions. The autocorrelation matrix,
defined in (3.21), is a function of the inputs (X) to the antenna array. The input is a
summation of noise, the desired signal and the interference from various directions. The
input to the array then depends on the positions of the array, the noise power, the signal
power and the powers of the interferers. Since an antenna array cannot control the power
incident upon it, the autocorrelation matrix can only be altered by changing the array
geometry, D. Hence, the MSE in (5.1) is actually only a complicated function of the
geometry of the array. Naturally, the question of determining an optimum geometry for
an adaptive array arises, which is the subject of this chapter.
5.2. Interference Environment
Military communication systems will potentially be used in environments with a
large amount of cochannel interference from sources intending to impede
communication. In this situation, an antenna array is suitable for blocking interference
spatially separated from the desired signal direction.
In general, the arrays do not operate in a single situation in which the interference arrives from known directions. As a result, it would not be prudent to optimize the array geometry for a specific interference situation (for example, 3 interferers from 3 distinct angles). Instead,
the concept of an interference environment will be introduced as a statistical
characterization of the expected directions and relative power of the interference. For
example, a cell phone tower would expect the interference to be confined to a fixed range
of elevation angles (directed towards the ground) and would not be concerned with
blocking interference from the sky. Other arrays used in more dynamic environments
may expect interference from all directions with equal probability. By optimizing an
array geometry with respect to an interference environment, it is possible to minimize the
expected (or average) interference power that is not rejected by the array. A specific
example of an interference environment will be detailed in Section 5.4. The task is now
to derive an optimization problem whose solution yields an optimal array geometry.
Recall the definition of the autocorrelation matrix:
$$\mathbf{R}_{XX} = E[\mathbf{X}(t)\mathbf{X}^H(t)], \quad (5.3)$$
where the expectation is over time. Each unique interference situation in which the array
operates will have a unique autocorrelation matrix. Going one step further, the expected
autocorrelation matrix, $\mathbf{S}_X$, is now defined as

$$\mathbf{S}_X = E_I[\mathbf{R}_{XX}], \quad (5.4)$$
where the expectation operator is now over the interference situations (which defines the
interference environment). A noteworthy observation is that if all the antenna elements have the same physical orientation, then $\mathbf{S}_X$ can alternatively be found by treating the elements as isotropic sensors while simply adjusting the power levels in the interference environment. As a simple example, suppose an array was operating in an environment in which interference occurred from one of two distinct angles of arrival with equal probability. Each situation would have an autocorrelation matrix associated with it, written as $\mathbf{R}_{XX1}$ and $\mathbf{R}_{XX2}$. Then the expected autocorrelation matrix is
$$\mathbf{S}_X = 0.5(\mathbf{R}_{XX1} + \mathbf{R}_{XX2}). \quad (5.5)$$
5.3. Optimization for Interference Suppression
If it is assumed that the interference has a larger power than the signal of interest or
that there are many interferers, then the array’s primary goal is to minimize the output
power while restricting one of the weights in the array to be unity. This is similar to a
sidelobe cancellation system [47] and is also the method used in a 7-element adaptive array developed by Raytheon for combating interference in GPS systems [75]. By using
the power minimization technique, the array can greatly reduce the amount of
interference power that makes it into the next stage of processing (usually a temporal
filter). Note that power minimization does not attempt to place the maximum of the array
factor towards the signal of interest. This is a suboptimal technique in regards to the
MSE; however, when the interference power is much stronger than the power of the
desired signal, this technique produces weights close to those produced using the MMSE
weights. The advantage of this technique is its simplicity, as it does not require
estimating the direction of arrival of the signal of interest or its power.
The output power from the array at any time is
y y* = w^H X X^H w.  (5.6)
For a fixed interference situation, the average output power P is then
P = w^H R_XX w.  (5.7)
A measure of the average output power for a given interference environment is then
P = w^H S_X w.  (5.8)
One of the weights is restricted to be unity so that the power minimization algorithm does
not set all of the weights to zero. In addition, for practical reasons such as minimizing
the effects of mutual coupling, it will be required that the separation between elements be
at least λ/4. Let r_ij be the separation between elements i and j. The problem of finding
an optimal array for interference suppression can then be written as the optimization
problem given in (5.9). Note that e_1^T = [1 0 0 ⋯ 0]. The minimization variables are the
complex weights and the values of r_ij.
min_w  w^H S_X w
s.t.   w^H e_1 = 1
       r_ij ≥ λ/4  for i ≠ j   (5.9)
Assuming the locations of the antenna elements are known (or held fixed), the
optimal weight vector for this problem can be found by using Lagrange multipliers. The
Lagrangian can be expressed as
) 1 ( ) , ( − Λ + = Λ
1 X
e w w S w w
H H
L . (5.10)
Taking the gradient with respect to w of the Lagrangian and setting the result to zero,
(5.10) becomes
0 2
1
= Λ + = ∇ e w S
X opt
L , (5.11)
where
opt
w are the powerminimizing weights. Assuming that
X
S is invertible, the
weights can be solved from (5.11) as
2
1
1
e S
w
X
−
Λ −
=
opt
. (5.12)
The parameter Λ can be determined by invoking the equality constraint of (5.9),
w_opt^H e_1 = −(Λ/2) e_1^T S_X^{-1} e_1 = 1,  (5.13)
where the property (S_X^{-1})* = S_X^{-1} was used, which follows from the definition of an
autocorrelation matrix. The solution to (5.13) can be substituted into (5.12), yielding the
power-minimizing weights,
w_opt = S_X^{-1} e_1 / (e_1^T S_X^{-1} e_1).  (5.14)
Substituting (5.14) into the objective function of (5.9), the minimum value of the
objective function for a fixed geometry becomes
min_w { w^H S_X w } = 1 / (e_1^T S_X^{-1} e_1).  (5.15)
The goal is now to minimize (5.15) over all array geometries that meet the
constraints in (5.9). Minimization of (5.15) is equivalent to maximizing its reciprocal;
this is true whenever a function is strictly positive. Equation (5.15) is always
positive because an autocorrelation matrix is always positive semidefinite [76],
positive semidefinite matrices always have nonnegative quadratic forms [77], and if a
matrix is positive definite then its inverse will be as well [77]. Hence, the
minimization of (5.15) can be rewritten as the maximization problem in (5.16).
The notation [Z]_mn will be used to represent the element of the matrix Z from the
m-th row and n-th column, so that the optimization problem can be written as
max   e_1^T S_X^{-1} e_1 = [S_X^{-1}]_11
s.t.  r_ij ≥ λ/4  for i ≠ j.   (5.16)
The objective function in (5.16) is only a function of the antenna locations and the
interference environment. Since the interference environment cannot be controlled, the
performance of the array using the optimal weights of (5.14) is only a function of the
antenna locations. The optimal element locations are those that maximize the objective
function of (5.16) subject to the specified constraints. The solution to (5.16) will not be
unique, because S_X is invariant to translation (shifting the elements uniformly). The
optimization problem in (5.16) is what needs to be solved in order to determine an
optimum array geometry for a given interference environment.
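As a concrete illustration, the closed-form weights of (5.14) and the minimum output power of (5.15) can be evaluated numerically. The following is a minimal NumPy sketch (the function names are illustrative, not from the text):

```python
import numpy as np

def power_min_weights(S):
    """Power-minimizing weights of (5.14): w = S^-1 e1 / (e1^T S^-1 e1).

    S is the (Hermitian, invertible) expected autocorrelation matrix S_X.
    The constraint in (5.9) forces the first weight to be unity.
    """
    e1 = np.zeros(S.shape[0])
    e1[0] = 1.0
    s_inv_e1 = np.linalg.solve(S, e1)   # S^-1 e1, without forming the inverse
    return s_inv_e1 / s_inv_e1[0]       # divide by e1^T S^-1 e1 = [S^-1]_11

def min_output_power(S):
    """Minimum objective value of (5.15): 1 / (e1^T S^-1 e1) = 1 / [S^-1]_11."""
    return 1.0 / np.linalg.inv(S)[0, 0].real
```

For any Hermitian positive-definite S the returned weights satisfy the unity constraint on the first element, and w^H S w equals the value of (5.15).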
5.4. Planar Array with Uniform Interference at Constant Elevation
As an example of the solution to (5.16), a planar array of N elements with the same
physical orientation is considered. This example assumes M interferers and takes the
interference to be mutually independent and arriving from a uniform distribution in the
azimuth direction (φ ∈ [0, 2π], measured counterclockwise from the x-axis), but from a
fixed elevation angle (θ ∈ [0, π], measured down from the z-axis towards the plane of the
array). The elements will be located at positions in the x-y plane given by
r_i = (x_i, y_i, 0),  i = 1, 2, ..., N.  (5.17)
With M interferers, the input to the array becomes
X(t) = Σ_{n=1}^{M} f(θ_n, φ_n) s_n(t) v(k_n),  (5.18)
where f(θ, φ) is the element pattern for each antenna, while s_n(t) and v(k_n) are the
signal and the steering vector for the n-th interferer, respectively. For simplicity, it will be
assumed that the antenna elements do not have a pattern that varies much in the azimuth
direction. This assumption allows the element factor to be eliminated because the
interferers are assumed to come from a fixed elevation angle; hence the response of the
antenna can be lumped into the received power. Finally, the steering vectors will be
rewritten as a function of the azimuth angle (φ_n) only, so that (5.18) simplifies to
X(t) = Σ_{n=1}^{M} s_n(t) v(φ_n).  (5.19)
The autocorrelation matrix becomes
R_XX = E[X(t) X^H(t)] = E[ ( Σ_{n=1}^{M} s_n(t) v(φ_n) ) ( Σ_{n=1}^{M} s_n(t) v(φ_n) )^H ].  (5.20)
Assuming the interference to be independent, it follows that
E[ ( s_n(t) v(φ_n) ) ( s_m(t) v(φ_m) )^H ] = 0  (5.21)
because E[s_n(t) s_m*(t)] = 0 for m ≠ n. For m = n, it follows that the components of the
autocorrelation matrix become
[ E[ ( s_n(t) v(φ_n) ) ( s_n(t) v(φ_n) )^H ] ]_ab
  = E[ s_n(t) s_n*(t) e^{−j(k_x x_a + k_y y_a)} e^{j(k_x x_b + k_y y_b)} ]
  = σ_n^2 e^{−j (2π/λ) sin θ ( cos φ_n (x_a − x_b) + sin φ_n (y_a − y_b) )}.  (5.22)
Equation (5.20) along with (5.22) gives the components of the autocorrelation matrix,
[R_XX]_ab = Σ_{n=1}^{M} σ_n^2 e^{−j (2π/λ) sin θ ( cos φ_n (x_a − x_b) + sin φ_n (y_a − y_b) )}.  (5.23)
The expected autocorrelation matrix can now be calculated from (5.23) and the fact
that the interference is uniformly distributed in the azimuth direction. The expected value
of (5.23) is taken, resulting in
E[ [R_XX]_ab ] = (1/2π) ∫_0^{2π} Σ_{n=1}^{M} σ_n^2 e^{−j (2π/λ) sin θ ( cos φ_n (x_a − x_b) + sin φ_n (y_a − y_b) )} dφ_n
  = Σ_{n=1}^{M} (σ_n^2 / 2π) ∫_0^{2π} e^{−j (2π/λ) sin θ ( cos φ_n (x_a − x_b) + sin φ_n (y_a − y_b) )} dφ_n.  (5.24)
In order to evaluate the integral in (5.24), the following variable substitutions are made:
(x_a − x_b) = R_ab cos φ_ab  (5.25)
(y_a − y_b) = R_ab sin φ_ab.  (5.26)
In (5.25) and (5.26), R_ab is the distance between elements a and b. Using the
trigonometric identity
cos(u − v) = cos(u) cos(v) + sin(u) sin(v),  (5.27)
and substituting (5.25)-(5.27) into (5.24), the integral in (5.24) becomes
∫_0^{2π} e^{−j (2π/λ) sin θ R_ab cos(φ_n − φ_ab)} dφ_n.  (5.28)
Since the integral in (5.28) is over a complete cycle of φ_n, the term φ_ab will not
contribute to the integral and can be arbitrarily set to zero without influencing the result.
The Bessel function of the first kind of order n can be written in integral form as [78]
J_n(x) = (j^{−n} / 2π) ∫_0^{2π} e^{j (x cos φ + n φ)} dφ.  (5.29)
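The step from (5.28) to (5.29) can be checked numerically: with n = 0, (5.29) implies that the integral of e^{−j a cos φ} over a full cycle equals 2π J_0(a). A small SciPy sketch (the helper name is illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def circular_integral(a):
    """Evaluate the integral of e^{-j*a*cos(phi)} over [0, 2*pi], as in (5.28)."""
    re, _ = quad(lambda p: np.cos(a * np.cos(p)), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda p: -np.sin(a * np.cos(p)), 0.0, 2.0 * np.pi)
    return re + 1j * im

# By (5.29) with n = 0, this equals 2*pi*J_0(a); the imaginary part vanishes
# because sin(a*cos(phi)) integrates to zero over a full cycle.
```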
Hence, (5.24) can be rewritten using (5.28) and (5.29) as
[S_X]_ab = E[ [R_XX]_ab ] = ( Σ_{n=1}^{M} σ_n^2 ) J_0( (2π/λ) R_ab sin θ ).  (5.30)
Equation (5.30) shows that the expected autocorrelation matrix depends on the total
interference power incident on the array, and not on the number of interferers. Equation
(5.30) along with (5.16) defines the optimization problem used to determine an array for
suppressing interference. A method of determining an optimal array is the subject of the
following section.
5.5. Using Simulated Annealing to Find an Optimal Array
The Simulated Annealing (SA) optimization algorithm described in Chapter 4 was found
to be suitable for the problem at hand. For simplicity, an elevation angle of θ = 90° is
chosen for the interferers. The candidate arrays (or points in the feasible space, as
discussed in Section 4.4) at every time step are represented by a real vector in ℜ^{2N},
which represents the x and y positions of the N elements.
In using the SA algorithm, a circular array is chosen as the initial array. Ideally, the
initial array chosen will have no effect on the optimization result. The perturbation
mechanism is implemented by choosing a random vector in ℜ^{2N} whose components are
zero-mean with a small variance. The variance is chosen such that the average
perturbation for each element is on the order of 0.01λ; a large variance will lead to an
imprecise search of the solution space, while a small variance will lead to a long
simulation time. The 2N components of this vector are added to the x-y coordinates of
the current array. If the perturbation moves any two elements too close to each other
(< 0.25λ), then the perturbation is discarded and a new perturbation is selected. Another
constraint is imposed such that all elements stay within 0.75λ of the origin (the center of
the initial circular array) to keep the search space finite. This number was chosen to be
large enough that the resulting optimal arrays were not altered by this constraint.
The initial temperature T_0 was chosen such that virtually all (>99%) of the
perturbations are accepted. The temperature is held constant for a fixed number (P) of
perturbations. The temperature is then multiplied by a factor u < 1. The solution array is
then again perturbed P times. This process is repeated until T is small enough that no
perturbations that decrease the objective function are accepted (recall the optimization
problem is one of maximization); once this happens the solution has converged to a
local maximum. If P is sufficiently large and the temperature is decreased sufficiently
slowly, this method will converge to the global optimum [68]. In the solution for N=6,
the parameters used were u=0.99, P=50,000, and T_0=12. These values were determined
by starting with small values of u, P, and T_0 and increasing them until the simulations
consistently returned the same solution starting from various initial arrays. As u, P, and
T_0 are increased, the probability of a correct (globally optimal) solution increases; if they
are decreased, the solution is less likely to be optimal. However, the tradeoff lies in the
computational time needed. The simulation for the 6-element array described below was
performed using MATLAB on a computer with a 2.9 GHz processor, and the solution
time was approximately 8 hours.
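The annealing loop described above can be sketched as follows. This is a scaled-down illustration (much smaller T_0, P, and cooling schedule than the values quoted in the text, so that it runs quickly); in the actual problem the objective callable would be [S_X^{-1}]_11 from (5.16):

```python
import numpy as np

def anneal_positions(objective, xy0, t0=0.5, u=0.8, p_steps=50,
                     step=0.01, r_min=0.25, r_max=0.75, t_final=1e-3, seed=0):
    """Simulated-annealing sketch of Section 5.5 (a maximization problem).

    objective maps an (N, 2) position array to a scalar. Perturbations that
    violate the minimum-spacing or maximum-extent constraints are discarded.
    """
    rng = np.random.default_rng(seed)
    cur_xy, cur = xy0.copy(), objective(xy0)
    best_xy, best = cur_xy, cur
    t = t0
    while t > t_final:
        for _ in range(p_steps):
            cand = cur_xy + rng.normal(0.0, step, cur_xy.shape)
            dists = np.linalg.norm(cand[:, None] - cand[None, :], axis=-1)
            np.fill_diagonal(dists, np.inf)
            if dists.min() < r_min or np.linalg.norm(cand, axis=1).max() > r_max:
                continue  # discard infeasible perturbations
            val = objective(cand)
            # accept improvements always, worse moves with Boltzmann probability
            if val > cur or rng.random() < np.exp((val - cur) / t):
                cur_xy, cur = cand, val
                if cur > best:
                    best_xy, best = cur_xy, cur
        t *= u  # cool the temperature by the factor u < 1
    return best_xy, best
```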
The optimum array configurations for the N=4, 5 and 6 element arrays are found
using the above optimization procedure and plotted in Figures 16, 17 and 18,
respectively. The dotted circles in these figures are of radius 0.25λ. The results suggest
that the interference-suppression capabilities are best for arrays spaced as closely as
possible. Each optimal array has a center element surrounded by a circular ring of radius
0.25λ (the minimum distance allowed). This suggests a tradeoff between interference
suppression and the large element spacings used for diversity or to minimize mutual
coupling.
Figure 16. Optimum N=4 element array (measured in units of λ).
Figure 17. Optimum N=5 element array (measured in units of λ).
Figure 18. Optimum N=6 element array (measured in units of λ).
5.6. Evaluating the Performance of the Optimal Arrays
In order to illustrate the performance of the optimum array, it will be compared to
three other standard arrays: a circular array with radius chosen such that the spacing
along the circle between elements is 0.5λ, as suggested in [79]; a linear array with
inter-element spacing 0.5λ oriented along the z-axis; and a rectangular array with
inter-element spacing 0.5λ.
Interferers from six different angles are chosen, each randomly selected from a
uniform distribution (on [0°, 360°]) and all at the same elevation angle (90°). The output
power (w^H R_XX w) is calculated when the weights are given by the optimal weight
vector for this specific instance, given in (5.31). This is the steady-state solution the
adaptive power-minimization algorithm would converge to in practice, if the first weight
is fixed at unity.
w_opt = R_XX^{-1} e_1 / (e_1^T R_XX^{-1} e_1)  (5.31)
This process is repeated 100,000 times to form an average output power for this type of
interference environment. The results are listed in Table I, where the average output
power is given relative to the power allowed by the optimal array.
TABLE I
OUTPUT POWER COMPARISON AMONG DIFFERENT ARRAYS
Array Relative Power (N=4) Relative Power (N=5) Relative Power (N=6)
Optimal 0 dB 0 dB 0 dB
Circular 11.5 dB 16.9 dB 32.2 dB
Rectangular 7.1 dB 20.5 dB 32.4 dB
Linear 12.2 dB 24.4 dB 37.8 dB
Table I illustrates the dramatic effect that array geometry can have on the
interference-suppression capabilities of the array. The optimum arrays performed
significantly better on average than the standard arrays used in practice, by a larger
margin than might reasonably be expected. The output powers for the standard arrays are
much higher than those for the optimal arrays, clearly showing the optimal arrays'
superior interference-suppression capabilities.
After viewing Figures 16-18, one may easily conjecture what the optimal 7-element
array would be. The optimization procedure is applied and confirms the solution to be
that given in Figure 19. The interesting property of the optimal 7-element array is that it
is a hexagonally sampled planar array. In multidimensional digital signal processing, it
is well known that the optimal sampling strategy to avoid aliasing for circularly
bandlimited signals is a hexagonal sampling lattice [80]. Hence, while sampling for
reconstruction and sampling for interference suppression are fundamentally different, the
problems are strongly related and the optimal solutions come out the same (for the case
of a circular interference environment). This parallel strengthens the methods and
procedures applied in determining optimal arrays.
Figure 19. Optimum N=7 element array (measured in units of λ).
The array geometry in Figure 19 is the layout of the 7-element GAS-1 (GPS
Antenna System) array developed by Raytheon, whose primary function is to suppress
interferers or jamming [75]. The method developed here confirms that the GAS-1
geometry is optimum for the case of a planar array in a circular interference environment.
The elements of the GAS-1 array are circular patches, each operating at the dual
frequencies of L1 (1.575 GHz) and L2 (1.227 GHz).
The method derived above seeks to minimize output power. While this has its
advantages, the primary disadvantage is that the desired signal may be muted along with
the interferers. To get an idea of the signal-to-interference ratio (SIR) at the output of the
array, a few test cases are considered. In Case 1, the desired signal arrives from θ_d = 45°
and φ_d = 0°. Twelve interferers are selected from a fixed elevation angle (θ_I = 90°) and
random azimuth angles, with a 30 dB interference-to-signal ratio (ISR). The weight
vector used for each case is the vector that minimizes the MSE, given in (3.30). The
process will be repeated (with random interference directions selected) 100,000 times to
form an expected SIR, given by
SIR = ( Σ_n S_n ) / ( Σ_n I_n ),  (5.32)
where S_n and I_n are the output signal power and the output interference power for the
n-th situation, respectively. The resulting SIRs for N=5, 6, and 7 elements are determined
for the optimal array along with a circular, linear, and rectangular array as before. The
results for Case 1 are given in Table II. Case 2 is the same as Case 1 except that the
signal arrives from θ_d = 0°. The results for Case 2 are presented in Table III. Case 3 is
the same as Case 1 except that the signal arrives from θ_d = 90°. The results for Case 3
are presented in Table IV.
TABLE II
RELATIVE SIR FOR CASE 1
Array Relative SIR (N=5) Relative SIR (N=6) Relative SIR (N=7)
Optimal 0 dB 0 dB 0 dB
Circular 6.45 dB 11.4 dB 26.9 dB
Rectangular 6.41 dB 15.85 dB 25.13 dB
Linear 2.98 dB 5.82 dB 19.15 dB
TABLE III
RELATIVE SIR FOR CASE 2
Array Relative SIR (N=5) Relative SIR (N=6) Relative SIR (N=7)
Optimal 0 dB 0 dB 0 dB
Circular 3.9 dB 10.15 dB 35.9 dB
Rectangular 5.8 dB 19.25 dB 27.2 dB
Linear 5 dB 10.24 dB 21 dB
TABLE IV
RELATIVE SIR FOR CASE 3
Array Relative SIR (N=5) Relative SIR (N=6) Relative SIR (N=7)
Optimal 0 dB 0 dB 0 dB
Circular 11.3 dB 2.5 dB 4.3 dB
Rectangular 11.6 dB 0.45 dB 6.64 dB
Linear 15.1 dB 7.6 dB 4.29 dB
The results given in Tables II-IV show that, on average, the optimal array boosts the
SIR compared to the other arrays. In some situations, such as the N=7 arrays for Cases 1
and 2, the optimal array produces significant SIR gains compared to the standard arrays.
The linear array has some advantage in blocking interference in that it has a smaller field
of view than the other two-dimensional arrays. This is exhibited by its slightly superior
results for Cases 1 and 2 when N=5. However, when the signal is in the same plane as
the interferers (Case 3), the linear array performs poorly. Therefore, while the
optimization problem was set up to minimize output power and thereby reduce
interference, it does indeed raise the output SIR as desired.
5.7. Summary
To briefly summarize the chapter, an optimization problem (5.16) has been derived
whose solution yields an optimal array for suppressing interference. Optimizing an
adaptive antenna array’s geometry can be done by defining an interference environment,
or expected directions and level of interference. In this manner, the array is not
optimized for a specific situation, but rather optimized to maximize the performance on
average based on the expected environment the array is to operate in. A specific problem
of a circular interference environment was studied, and a method of solution was
demonstrated using the Simulated Annealing optimization algorithm. In addition, the
results implicitly show that the array geometry used has a significant effect on the array’s
performance.
VI. MINIMUM SIDELOBE LEVELS FOR LINEAR ARRAYS
6.1. Introduction
One-dimensional arrays have been extensively analyzed, dating back to the early
part of the 20th century. Their ubiquity in textbooks and actual applications is partly due
to the relative ease with which they are analyzed. However, the minimum possible
sidelobe level for an N-element linear array has yet to be determined. Determining this
level for a linear array of arbitrary elements, steered to an arbitrary angle, is the goal of
this chapter.
Methods of weight selection were discussed in Chapter 3. The most important for
the discussion of this chapter is the Dolph-Chebyshev weighting method. This method
can determine minimum-sidelobe weights for uniformly spaced linear arrays of
omnidirectional antennas. Optimizing geometry for sidelobe minimization has also been
examined via a range of techniques, as discussed in Chapter 1. Recently, [17] used the
Particle Swarm Optimization (PSO) method to determine optimum sidelobe-minimizing
positions for linear arrays assuming the weights were constant.
In this chapter, the weights and positions of a linear array will be optimized to
lower sidelobes. In [13],[17] the authors force the arrays to have symmetry about the
center to keep the array factor real; the work in this chapter does not require this
restriction. In addition, the arrays will have no bounds on minimum or maximum
element separation, except in Section 6.5 where a minimum element separation is needed.
For a given linear array, a method of finding the optimum weights for minimizing
the sidelobe level is derived given:
• a beamwidth
• the array elements' positions
• the individual antenna's radiation pattern
• a desired direction (θ_d) for the array to be scanned.
This problem will be posed in convex form; thus it can be solved without searching
through the space of weights as in [28]. The element positions will be unrestricted and
the space will be extensively searched via PSO in order to find optimum positions in
conjunction with the corresponding optimum weights. The positions found via PSO are
likely to be globally optimal as discussed in Section 6.4. Consequently, the results
presented here likely represent global bounds on the minimum possible sidelobe levels
achievable for a given beamwidth. Array designers can use this information to judge how
well their arrays perform compared to the best design possible, and to determine whether
altering the weights or element positions could return a significant improvement in
performance.
6.2. Problem Setup
The basic geometry of a one-dimensional linear array is shown in Figure 20. The
positions of a one-dimensional N-element linear array can be written as a vector
d = (d_1, d_2, ..., d_N), where d_n is the position of the n-th element measured from the
origin along the z-axis. Incoming plane waves are characterized by an angle θ (measured
from the z-axis) that specifies their direction of arrival.
Figure 20. Basic setup of a linear N-element array.
Assuming a vector w = (w_1, w_2, ..., w_N) of complex excitation weights, the array
factor (AF) can be rewritten from (2.20) as
AF(w, d, θ) = Σ_{n=1}^{N} w_n e^{j k d_n cos θ},  (6.1)
where k = 2π/λ. For a given angle θ, the array factor is a function of the weight vector
and the positions of the elements. It will be assumed that the elements are identical and
oriented in the same direction. The problem can be extended to arrays with distinct
antenna elements in a straightforward manner. The total radiation pattern T(w, d, θ) is
then the product of the array factor and the element pattern, given by
T(w, d, θ) = f(θ) AF(w, d, θ),  (6.2)
where f(θ) is each element's radiation pattern. This chapter addresses determining the
minimum possible sidelobe level of an N-element array for a given beamwidth.
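Equations (6.1) and (6.2) translate directly into code. The following is a small NumPy sketch (names are illustrative); the default element pattern is omnidirectional, f(θ) = 1:

```python
import numpy as np

def total_pattern(w, d, theta, wavelength=1.0,
                  element=lambda th: np.ones_like(th)):
    """Total radiation pattern T(w, d, theta) of (6.2).

    w: complex weights, d: element positions along the z-axis (in the same
    units as wavelength), theta: angles in radians measured from the z-axis.
    The array factor of (6.1) is sum_n w_n * exp(j*k*d_n*cos(theta)).
    """
    k = 2.0 * np.pi / wavelength
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    steering = np.exp(1j * k * np.outer(np.cos(theta), d))  # e^{j k d_n cos(theta)}
    return element(theta) * (steering @ np.asarray(w))
```

For a uniformly excited array, the broadside magnitude |T(90°)| equals the number of elements N.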
The sidelobe level of an array intrinsically depends on the element positions (d) and
the weights (w). Determining the minimum possible sidelobe level of an N-element
linear array consists of finding the optimum combination of weights and element
positions that minimizes the sidelobe level. To accomplish this, Section 6.3 determines
optimum weights for a linear array with arbitrary element positions. Since the optimum
weights can be determined for every linear array, the problem reduces to finding the
optimum positions that (along with the corresponding optimum weights) yield a sidelobe
level that no other combination of weights and element positions can improve upon.
6.3. Determination of Optimum Weights for an Arbitrary Linear Array
For nonuniformly spaced linear arrays, a new method must be developed to
determine the optimal sidelobe-minimizing weights. In [27], an optimization procedure
using linear programming (LP) was developed for sidelobe-minimizing weights;
however, their results only apply to symmetric arrays and real-valued weights.
Let the positions of an arbitrarily spaced N-element linear array be described by the
vector d. The sidelobe level will be defined as the maximum value of the total radiation
pattern outside of the main beam. The beamwidth is then the angular range in which the
radiation pattern is not to be minimized. Letting Θ represent the angles at which the
radiation pattern is to be suppressed, the sidelobe level (SLL) can be written
mathematically as
SLL = max_{θ∈Θ} |T(w, d, θ)|.  (6.3)
The normalized radiation pattern is constrained to be unity towards the desired direction
(θ_d). The optimum sidelobe-minimizing weights are therefore the solution to the
optimization problem given in (6.4).
min_{w∈C^N}  max_{θ∈Θ} |T(w, d, θ)|
s.t.  T(w, d, θ_d) = 1   (6.4)
In (6.4), C^N is the set of all N-element vectors with complex components, and the
complex weight vector w is the variable in the problem. This problem can be put into a
fairly simple convex optimization form, which is rapidly solvable and whose solutions
are guaranteed to be globally optimum.
To accomplish this, the weights are first expressed in terms of their real and
imaginary parts,
w_n = w_n^RE + j w_n^IM.  (6.5)
The real part of the total radiation pattern can then be written as
Re{T(w, d, θ)} = f(θ) Σ_{n=1}^{N} { w_n^RE cos(k d_n cos θ) − w_n^IM sin(k d_n cos θ) },  (6.6)
and the imaginary part as
Im{T(w, d, θ)} = f(θ) Σ_{n=1}^{N} { w_n^IM cos(k d_n cos θ) + w_n^RE sin(k d_n cos θ) }.  (6.7)
Next, Θ is partitioned into M discrete sample points θ_1, θ_2, ..., θ_M. Selection of the
sample points is discussed at the end of the section. Minimizing the magnitude of the
total radiation pattern at a fixed angle θ_i, while the beam is maximum in the direction θ_d,
can be written as an optimization problem (with variables w_1^RE, w_1^IM, ..., w_N^RE,
w_N^IM, t_i, and s_i), given in (6.8).
min  t_i^2 + s_i^2
s.t.  −t_i ≤ Re{T(w, d, θ_i)} ≤ t_i
      −s_i ≤ Im{T(w, d, θ_i)} ≤ s_i   (6.8)
      Re{T(w, d, θ_d)} = 1
      Im{T(w, d, θ_d)} = 0
In the above optimization problem, t_i and s_i are dummy variables. The inequality
constraints in (6.8) on the real part of the total radiation pattern can be written as a linear
inequality (for notational simplicity, let p_n^i = k d_n cos θ_i):
f(θ_i) [  cos p_1^i  ⋯  cos p_N^i   −sin p_1^i  ⋯  −sin p_N^i   −1   0
         −cos p_1^i  ⋯  −cos p_N^i   sin p_1^i  ⋯   sin p_N^i   −1   0 ]
       [ w_1^RE ⋯ w_N^RE  w_1^IM ⋯ w_N^IM  t_i  s_i ]^T  ≤  [ 0  0 ]^T.  (6.9)
Similarly, the constraints on the imaginary part of the total radiation pattern can also be
expressed in this form. Hence, (6.8) can be rewritten into the simpler form given in
(6.10).
min  t_i^2 + s_i^2
s.t.  A_i Z ≤ 0   (6.10)
      B Z = 0
In (6.10), A_i is the matrix that describes the inequality constraints, B is a matrix that
describes the equality constraints, Z is a vector of the problem variables (w_1^RE, w_1^IM,
..., w_N^RE, w_N^IM, t_i, s_i), and 0 is a vector of zeros. The optimization problem of
(6.10) is a simple quadratic program, which is easily solved numerically [81].
This procedure can be accomplished at every angle θ_i at which the total radiation
pattern is to be suppressed. Extend the vector Z to include the weights and all the
dummy variables:
Z^T = [ w_1^RE w_1^IM ⋯ w_N^RE w_N^IM  t_1 t_2 ⋯ t_M  s_1 s_2 ⋯ s_M ].  (6.11)
Adding the constraints for all M sample points to the matrix A, the problem in
(6.10) can be extended into the form given in (6.12).
min  max_{i=1,...,M} ( t_i^2 + s_i^2 )
s.t.  A Z ≤ 0   (6.12)
      B Z = 0
This problem will minimize the array factor at all desired locations. Since t_i^2 + s_i^2 is a
convex function for each i, the pointwise maximum in (6.12) is also a convex function
(as derived in Chapter 4).
Finally, (6.12) does not guarantee that the magnitude of the total array radiation pattern
is a maximum at θ_d (the maximum could be anywhere outside the region Θ). To have a
maximum at θ_d, a necessary condition is for the derivative of the squared magnitude of
the radiation pattern to be zero. Writing T(θ) = F(θ) + jI(θ) in terms of its real and
imaginary parts, it follows that
d(T T*)/dθ = T* (dT/dθ) + T (dT*/dθ) = (F − jI)(F′ + jI′) + (F + jI)(F′ − jI′).  (6.13)
At θ_d, the equality constraints force F = 1 and I = 0, which implies that the squared
magnitude of the total radiation pattern has zero derivative if
F′ = d/dθ Re{T(w, d, θ_d)} = 0.  (6.14)
This constraint is also a linear constraint on w, and it can be added to the matrix B.
When this is implemented, the method is sufficient to have the maximum of the radiation
pattern in the direction θ_d (at least for all cases considered in this chapter).
In summary, the result is that (6.12) represents a convex optimization problem and
is therefore solvable, and the solutions represent global optima [66]. This problem can be
solved via a standard numerical optimization routine, or via commercial software such as
MATLAB (for example, using the function fminimax).
The only question left is the selection of the number of sample points, M. The
only two values of θ that must be selected precisely are those that define the boundary of
the main beam; the remaining values can be selected fairly sparsely. Usually, a spacing
of 5° between sample points is sufficient. Once the weights are determined, the total
radiation pattern can be plotted to verify that the sidelobes are indeed suppressed as
desired, even in between the sampled points. In theory, it is desirable to choose sample
points as closely spaced as possible to guarantee the radiation pattern is suppressed. In
practice, the method works with sparse spacing, and it is advantageous to choose a sparse
sampling to speed up the computation time.
The method developed in this section returns weights almost identical to those of
the Dolph-Chebyshev method for uniformly spaced linear arrays of omnidirectional
antenna elements. The discrepancy results from the Dolph-Chebyshev method using the
null-to-null beamwidth and suppressing sidelobes outside of that region, whereas this
method suppresses sidelobes outside a specified beamwidth, which is not necessarily
null-to-null. However, as will be seen in Section 6.5, the difference is extremely small
and the weights are the same to at least three significant digits for the arrays in question.
6.4. Broadside Linear Array
In this section, broadside (θ_d = 90°) N-element linear arrays of omnidirectional
[f(θ) = 1] antennas are considered. The goal is to determine the optimum element
positions for minimizing sidelobes and, consequently, the global bound on sidelobe level
for linear arrays. Suppose the desired beamwidth is 2Δ°; then the region Θ in which the
radiation pattern is to be minimized can be written as
Θ = { θ : 0° ≤ θ ≤ θ_d − Δ  or  θ_d + Δ ≤ θ ≤ 180° }.  (6.15)
The sidelobe level (SLL) for a given weight vector w can be expressed in dB as
SLL(w) = 20 log_10 max_{θ∈Θ} |T(w, d, θ)|.  (6.16)
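A numerical sketch of (6.15)-(6.16) for omnidirectional elements is given below (the helper name is illustrative; the pattern is normalized so that |T(θ_d)| = 1, matching the constraint in (6.4)):

```python
import numpy as np

def sidelobe_level_db(w, d, delta_deg, theta_d_deg=90.0,
                      wavelength=1.0, step_deg=0.25):
    """SLL of (6.16): peak |T| in dB over the region Theta of (6.15).

    Omnidirectional elements [f(theta) = 1] are assumed, and the pattern is
    sampled every step_deg degrees over [0, 180] degrees.
    """
    k = 2.0 * np.pi / wavelength
    def af(theta_deg):
        phase = np.outer(np.cos(np.radians(theta_deg)), d)
        return np.exp(1j * k * phase) @ np.asarray(w)
    theta = np.arange(0.0, 180.0 + step_deg, step_deg)
    outside = np.abs(theta - theta_d_deg) > delta_deg   # the region Theta
    pattern = np.abs(af(theta)) / np.abs(af(np.array([theta_d_deg])))[0]
    return 20.0 * np.log10(pattern[outside].max())
```

For a uniformly weighted 8-element array with 0.5λ spacing and a 60° beamwidth (Δ = 30°), the SLL comes out well below 0 dB, as expected for sidelobes relative to the main beam.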
Two cases will be analyzed in this chapter. Case 1 will have a large beamwidth,
2Δ = 60°, and Case 2 will have a relatively small beamwidth, 2Δ = 30°. A minimum
inter-element separation of 0.25λ can be enforced. However, the results are the same
Since for any array configuration d the optimum weights can be found, the
problem now becomes one of finding the best antenna element positions to minimize the
SLL. For an array with two elements (N=2), the problem is a function of a single
variable (the element separation d_2 − d_1), and hence a global optimum can easily be
found. For linear arrays with more than two elements, a search method needs to be
employed that is rapid (without excessive computation time) and accurate (solutions are
consistent and no other method leads to a better solution). The Particle Swarm
Optimization (PSO) method was found to be suitable for this task. For a discussion of
PSO that goes beyond the elementary treatment in Chapter 4, the reader is referred to
[17, 82].
While no method short of an exhaustive search can guarantee a global optimum
for problems such as these (many local minima in the objective function), intelligent use
of PSO gives high confidence that the solutions are indeed globally optimum. For the
results presented in this work, the PSO is used by initially choosing a set of P random
arrays (these are the particles used in PSO). The arrays are described by a vector of
element positions, d. For each position vector, the optimum weights are calculated and
the sidelobe level is determined. The element positions are updated via the PSO
technique, and the process is repeated. The algorithm is run using the PSO parameters
w_V = 0.5, c_1 = 2.0, and c_2 = 2.0. Each run of the algorithm is executed for
= c . Each run of the algorithm is executed for
approximately 20200 iterations, or long enough such that several iterations no longer
88
decrease the objective function. If repeated use of the algorithm does not produce
identical results, P is increased until consistency is achieved.
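The PSO loop described above can be sketched as follows. This is a minimal, generic implementation using the parameters quoted in the text (w_V = 0.5, c1 = c2 = 2.0); the true objective would be the SLL of the optimally weighted array, which requires the inner weight optimization, so a simple stand-in objective is used here to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(objective, dim, n_particles, n_iter, lo, hi, w_v=0.5, c1=2.0, c2=2.0):
    # Minimal particle-swarm optimizer with the parameters quoted in the
    # text (inertia w_v = 0.5, cognitive/social constants c1 = c2 = 2.0).
    # Each particle is a candidate vector of element positions.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w_v * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in objective (a simple bowl) in place of the SLL evaluation,
# which would solve the inner weight problem for each candidate geometry.
best, fbest = pso(lambda p: float(np.sum((p - 1.0) ** 2)),
                  dim=3, n_particles=25, n_iter=150, lo=0.0, hi=3.0)
```

Increasing `n_particles` until repeated runs agree mirrors the consistency check described in the text.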
The number of particles P required for regular convergence is given in Table V. The number of particles required appears to grow roughly as N!, which indicates that PSO cannot consistently return optimum solutions for large arrays. Because the arrays (or particles) start from random positions every time, and because the number of antenna elements is small, this method is fairly certain to return the globally optimum array if the number of particles is large and repeated applications of the algorithm return identical arrays. Table V also gives the approximate time per simulation on a single 3.0 GHz processor running MATLAB (a speedup by approximately a factor of S can be obtained if the code is written for S processors in parallel).
TABLE V
NUMBER OF PARTICLES REQUIRED FOR CONVERGENCE FOR VARYING
ARRAY SIZE WITH SIMULATION TIME
Array Size (N) P Time (Hours)
2 2 0.05
3 4 0.2
4 16 1.3
5 100 7
6 700 50
7 3000 300
Results for broadside arrays of size N = 2-7 elements are presented in Tables VI-IX (note that d1 = 0 for all arrays in this chapter). For both cases, the optimum arrays are very regular, with uniform or nearly uniform spacing. The weights are allowed to be complex, but are found to be real valued. The weights given in Table VII are identical to the Dolph-Chebyshev weights (to at least 3 significant digits). Case 2 has a larger array length, consistent with results developed in [83], in which the beamwidth is reported to decrease with increasing array length. The magnitude of the array factor for both cases is shown in Figure 21 for N = 6, and the results for N = 7 are shown in Figure 22. Because the results for Case 1 have uniform spacing, the method in [15] correctly determines the optimum sidelobe level for this problem. However, this will not hold in the scenarios that follow.
TABLE VI
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θd = 90°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    0.667                                           -6.02
3    0.667   1.333                                   -16.90
4    0.667   1.333   2.000                           -27.05
5    0.667   1.333   2.000   2.667                   -39.45
6    0.667   1.333   2.000   2.667   3.333           -51.21
7    0.667   1.333   2.000   2.667   3.333   4.000   -62.62
TABLE VII
OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θd = 90°)

N    w1      w2      w3      w4      w5      w6      w7
2    0.500   0.500
3    0.286   0.429   0.286
4    0.154   0.346   0.346   0.154
5    0.083   0.247   0.340   0.247   0.083
6    0.044   0.166   0.290   0.290   0.166   0.044
7    0.024   0.107   0.227   0.286   0.227   0.107   0.024
TABLE VIII
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 2 (BW = 30°, θd = 90°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    0.794                                           -1.95
3    0.794   1.589                                   -6.59
4    0.794   1.589   2.383                           -12.24
5    0.794   1.589   2.383   3.178                   -18.18
6    0.794   1.589   2.383   3.178   3.97            -24.15
7    0.794   1.589   2.383   3.178   3.97    4.762   -30.07
TABLE IX
OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θd = 90°)

N    w1      w2      w3      w4      w5      w6      w7
2    0.500   0.500
3    0.286   0.429   0.286
4    0.154   0.346   0.346   0.154
5    0.083   0.247   0.340   0.247   0.083
6    0.044   0.166   0.290   0.290   0.166   0.044
7    0.059   0.127   0.198   0.227   0.199   0.130   0.060
(a) BW = 60°, θd = 90°   (b) BW = 30°, θd = 90°
Figure 21. Magnitude of array factor for optimal arrays (N=6).
(a) BW = 60°, θd = 90°   (b) BW = 30°, θd = 90°
Figure 22. Magnitude of array factor for optimal arrays (N=7).
6.5. Array Scanned to 45 Degrees
Suppose now the goal is to find the optimum array pattern for an arbitrarily spaced N-element linear array scanned to 45° from broadside (θd = 45°). This problem is solved in an identical manner to that in Section 6.4. However, for this problem, the arrays tend to favor closely spaced elements. Consequently, a minimum separation of 0.25λ between elements was enforced, as in [30]. The optimum element positions and the weight vectors are given in Tables X-XIII. The magnitude of the array factor for both cases is shown in Figure 23 for N = 6, and Figure 24 plots the results for N = 7.
The weights found in this section are complex. For clarity, X∠Y is equal to X cos(Y) + jX sin(Y). The positions found are very irregular. For many of the cases, at least one element separation equals the minimum allowable value (0.25λ). Comparing these results to Section 6.4, it is clear that it is more difficult for a linear array to have low sidelobes when it is scanned away from broadside.
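The polar notation defined above maps directly to complex weights. A small sketch, using Python's standard library (the specific weight values are read from Table XI):

```python
import cmath
import math

def polar_weight(mag, angle_deg):
    # X∠Y as defined in the text: X cos(Y) + j X sin(Y), Y in degrees.
    return cmath.rect(mag, math.radians(angle_deg))

# Example: the N = 2 weights of Table XI, w1 = 0.499 and w2 = 0.501∠231°.
w1 = polar_weight(0.499, 0.0)
w2 = polar_weight(0.501, 231.0)
```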
TABLE X
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θd = 45°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    0.508                                           -0.76
3    0.250   1.432                                   -5.44
4    0.250   1.512   1.762                           -8.74
5    0.250   1.475   1.725   1.975                   -12.64
6    0.250   0.707   1.442   1.912   2.162           -19.60
7    0.250   0.734   1.369   1.850   2.108   3.935   -23.00
TABLE XI
OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θd = 45°)
(a) Results for N = 2-4.

     N=2          N=3          N=4
w1   0.499        0.425∠43°    0.304∠45°
w2   0.501∠231°   0.465∠248°   0.415∠250°
w3                0.386∠5°     0.408∠21°
w4                             0.297∠227°

(b) Results for N = 5-7.

     N=5          N=6          N=7
w1   0.287∠30°    0.231∠51°    0.146∠44°
w2   0.345∠246°   0.293∠260°   0.258∠257°
w3   0.359∠63°    0.172∠132°   0.195∠143°
w4   0.427∠279°   0.167∠37°    0.195∠48°
w5   0.22∠135°    0.292∠272°   0.275∠292°
w6                0.237∠121°   0.169∠145°
w7                             0.057∠25°
TABLE XII
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 2 (BW = 30°, θd = 45°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    0.535                                           -0.31
3    0.250   2.030                                   -3.77
4    0.302   1.933   2.976                           -6.92
5    0.250   2.130   2.380   3.617                   -8.85
6    0.250   2.123   2.373   3.653   4.657           -10.39
7    0.250   0.926   2.112   2.362   3.806   4.056   -12.36
TABLE XIII
OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θd = 45°)
(a) Results for N = 2-4.

     N=2          N=3          N=4
w1   0.50         0.469∠47°    0.298∠37°
w2   0.5∠224°     0.524∠248°   0.335∠248°
w3                0.34∠211°    0.298∠226°
w4                             0.191∠330°

(b) Results for N = 5-7.

     N=5          N=6          N=7
w1   0.23∠49°     0.196∠49°    0.21∠42°
w2   0.252∠247°   0.218∠248°   0.18∠238°
w3   0.322∠224°   0.355∠223°   0.165∠111°
w4   0.385∠70°    0.361∠64°    0.293∠228°
w5   0.195∠175°   0.186∠169°   0.245∠69°
w6                0.068∠251°   0.163∠164°
w7                             0.162∠7°
(a) BW = 60°, θd = 45°   (b) BW = 30°, θd = 45°
Figure 23. Magnitude of array factor for optimal arrays (N=6).
(a) BW = 60°, θd = 45°   (b) BW = 30°, θd = 45°
Figure 24. Magnitude of array factor for optimal arrays (N=7).
6.6. Array of Dipoles Scanned to Broadside
The broadside case is considered again, this time assuming the antennas are short or ideal dipoles having the normalized radiation pattern

    f(θ) = sin(θ).   (6.17)

For this case, no minimum separation is required, since the elements tend to spread out as in Section 6.4. The problem is solved again as in Section 6.4. The resulting optimum element positions and weights are given in Tables XIV-XVII. The magnitude of the total radiation pattern for both cases is shown in Figure 25 for N = 6, and Figure 26 gives the results for N = 7.
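The total pattern here is the element pattern (6.17) multiplying the array factor. A minimal sketch; the phase convention exp(j 2π d_n cos θ / λ) is one common choice assumed for illustration, not necessarily the dissertation's exact convention:

```python
import numpy as np

def total_pattern(theta, w, d, lam=1.0):
    # Total pattern of a linear array of short dipoles: the element
    # pattern f(theta) = sin(theta) of (6.17) multiplies the array
    # factor.  Phase convention exp(j 2 pi d_n cos(theta) / lam) is
    # assumed here for illustration.
    k = 2.0 * np.pi / lam
    af = np.sum(w * np.exp(1j * k * d * np.cos(theta)))
    return float(np.sin(theta) * np.abs(af))

# The N = 2 optimum of Tables XIV-XV (Case 1): positions [0, 0.793]
# wavelengths, weights [0.5, 0.5]; at broadside (theta = 90 deg) the
# normalized total pattern equals 1.
val = total_pattern(np.pi / 2.0, np.array([0.5, 0.5]), np.array([0.0, 0.793]))
```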
TABLE XIV
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    0.793                                           -11.15
3    0.726   1.452                                   -22.38
4    0.703   1.403   2.116                           -34.00
5    0.696   1.391   2.087   2.788                   -45.40
6    0.693   1.382   2.069   2.755   3.441           -56.50
7    0.681   1.362   2.043   2.724   3.405   4.086   -66.90
TABLE XV
OPTIMUM WEIGHTS FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)

N    w1      w2      w3      w4      w5      w6      w7
2    0.500   0.500
3    0.276   0.448   0.276
4    0.150   0.351   0.351   0.148
5    0.082   0.25    0.343   0.246   0.079
6    0.043   0.165   0.292   0.292   0.165   0.043
7    0.023   0.106   0.227   0.288   0.227   0.106   0.023
TABLE XVI
OPTIMUM ELEMENT POSITIONS (IN λ) AND SLL FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)

N    d2      d3      d4      d5      d6      d7      SLL (dB)
2    1.130                                           -4.64
3    0.951   1.900                                   -9.93
4    0.915   1.779   2.675                           -15.70
5    0.876   1.723   2.565   3.463                   -21.70
6    0.842   1.681   2.516   3.358   4.233           -27.50
7    0.860   1.706   2.528   3.362   4.192   5.024   -33.58
TABLE XVII
OPTIMUM WEIGHTS FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)

N    w1      w2      w3      w4      w5      w6      w7
2    0.500   0.500
3    0.341   0.313   0.347
4    0.224   0.283   0.273   0.220
5    0.139   0.224   0.270   0.224   0.143
6    0.087   0.168   0.239   0.238   0.178   0.090
7    0.047   0.122   0.196   0.231   0.208   0.134   0.062
(a) BW = 60°, θd = 90°   (b) BW = 30°, θd = 90°
Figure 25. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=6).
(a) BW = 60°, θd = 90°   (b) BW = 30°, θd = 90°
Figure 26. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=7).
The results of this section, compared with those of Section 6.4, show that the optimum linear-array element positions (and associated weights) depend on the type of antenna elements used in the array. The elements in this section have a larger spacing than the broadside array considered in Section 6.4. The individual dipole's radiation pattern lowers the total radiation pattern away from broadside. Consequently, this helps to lower the overall sidelobe level, so that an array of dipoles has lower sidelobes than an array of omnidirectional radiators. This is evident from comparing the results of this section with those of Section 6.4.
6.7. Mutual Coupling
Mutual coupling is present to some degree in all antenna arrays. This coupling affects the radiation pattern of the elements, which can degrade the overall radiation pattern [84]. In this section, the extent to which the above results vary due to mutual coupling is considered.
Without mutual coupling, the output of the array will be written as X_ideal. When mutual coupling is present, the output of the array will be written as X_actual. Because of the linearity of Maxwell's equations, it is reasonable to model the coupling as a linear system. Hence the relationship between the ideal and actual array outputs can be written as

    X_actual = C X_ideal,   (6.18)

where C is a square matrix known as the mutual coupling matrix. This matrix can be modeled as [85]

    C = Z_L (Z + Z_L I)^(-1),   (6.19)
where Z_L is the load impedance of each element. In (6.19), Z is the impedance matrix, which relates the currents into the antennas to the voltages:

    V = Z I.   (6.20)
If the mutual coupling matrix is known, then the array output can be premultiplied by the inverse of the coupling matrix to obtain the decoupled array outputs, as in (6.21):

    X = C^(-1) X_actual.   (6.21)

Since the output of the array is given by

    y = w^H X = w^H C^(-1) X_actual,   (6.22)

the optimal weights derived in Section 6.3 can be replaced by

    w'_opt = C^(-1) w_opt.   (6.23)
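The compensation in (6.19) and (6.23) can be sketched numerically. The impedance matrix below is a hypothetical 2-element example chosen purely for illustration, not measured data:

```python
import numpy as np

def coupling_matrix(Z, ZL):
    # Mutual-coupling matrix of (6.19): C = Z_L (Z + Z_L I)^(-1), where Z
    # is the N x N impedance matrix of (6.20) and Z_L the load impedance.
    return ZL * np.linalg.inv(Z + ZL * np.eye(Z.shape[0]))

# Hypothetical 2-element impedance matrix (ohms), for illustration only.
Z = np.array([[73.0 + 42.0j, 40.0 - 28.0j],
              [40.0 - 28.0j, 73.0 + 42.0j]])
C = coupling_matrix(Z, 50.0)

# Compensated weights per (6.23): w' = C^(-1) w_opt, so applying C to
# the compensated weights recovers the ideal (uncoupled) weights.
w_opt = np.array([0.5, 0.5])
w_comp = np.linalg.inv(C) @ w_opt
```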
The resulting total radiation patterns will then be the same as presented here, to the extent that the mutual coupling matrix model is correct. Experimental results by Huang et al. [85] suggest that the model performs fairly well. A circular array of dipoles was considered in that work, which necessarily has a strong degree of mutual coupling because each element lies in the direction of maximum radiation of the others. Using (6.23), the compensated radiation patterns for the arrays were compared to the ideal (uncoupled) case, and the results were found to be in agreement. Hence, it is expected that arrays without such a strong degree of coupling (as commonly used in practice) can also be accurately modeled using (6.18) and (6.19).
6.8. Conclusions
This chapter determined the limits of performance for linear arrays of size N = 2-7. A method of determining globally optimum weights for minimizing the sidelobes of a given linear array was presented. The element positions were then varied until it was certain that a global optimum was found. Consequently, it is very likely that no other weighting strategy or element-placement scheme will lead to sidelobes lower than those presented here. These results can be used as a benchmark against existing array performance to determine whether it is worth updating the element placement or weighting strategy.
VII. MINIMIZING SIDELOBES IN PLANAR ARRAYS
7.1. Introduction
The natural next step in studying sidelobe minimization in antenna arrays is to look at two-dimensional or planar arrays. In this chapter, many of the ideas from Chapter 6 are extended to two-dimensional arrays, which are mathematically similar but whose total radiation pattern is more complex.
Sidelobe minimization has received renewed interest due to the difficult nature of the wireless channel. To block interference, it is best to place nulls in the direction of the interference. However, this often does not work well in practice. For example, the European standard for the 3rd generation of mobile communication is known as Wideband Code Division Multiple Access (WCDMA). In this scheme, the same frequency spectrum is shared simultaneously by all users; for an in-depth description, see [86]. Consequently, interference is a major problem. These systems are designed to work with a large number of users, and since an N-element antenna array can only place N-1 nulls, an impractically large number of antennas would be needed to null out signals from all directions not of interest. In addition, due to multipath effects, each signal will arrive from several distinct angles, which further reduces the performance of a nulling-based approach. In [87], the performance of arrays with different weighting methods used in WCDMA systems is compared. It was found that for a large number of interferers, a low-sidelobe method outperforms a nulling method. The low-sidelobe method is also preferred because no processing needs to be performed to determine the directions of arrival of the various signals. Hence, as the capacity requirements of wireless communication systems increase, methods for reducing sidelobe levels will become increasingly important.
In addition, WCDMA systems should not be modeled using a single-frequency or narrowband total radiation pattern. To address this, wideband arrays are studied in the latter half of this chapter, and the sidelobe-minimizing weighting methods are extended to work for wideband arrays. Previous work has attempted to develop wideband sidelobe-minimizing weights. In [88], the author develops a wideband weighting method for two-dimensional rectangular arrays. This method does incorporate the antenna patterns into the determination of the weight vector, which increases its utility. However, the results of that work assume all signals arrive from a fixed elevation angle, which is a major restriction. In addition, the resulting sidelobes are clearly suboptimal.
Other wideband weighting methods use antenna coefficients that vary with frequency in order to improve the radiation pattern over a range of frequencies. This is done using a tapped delay-line filter in [89], and with a recursive filter in [90]. In this chapter, the weights will remain constant (not a function of frequency) so that they are easily implemented in a real system.
The chapter is organized as follows. In Section 7.2, the two-dimensional sidelobe-minimization problem is addressed. In Section 7.3, sidelobe-minimizing weights are developed for two-dimensional arrays of arbitrary elements. In Section 7.4, optimal arrays and sidelobe levels are obtained for arrays of omnidirectional antennas of size N = 4-7 for two distinct beamwidths. In Section 7.5, optimal arrays and sidelobe levels are obtained for arrays of patch (or microstrip) antennas. A method of determining sidelobe-minimizing weights over a range of frequencies is developed in Section 7.6. Optimal arrays and sidelobe levels are obtained for wideband arrays of omnidirectional antennas in Section 7.7. Finally, in Section 7.8, optimal arrays and sidelobe levels are obtained for wideband arrays of patch antennas, and conclusions are presented in Section 7.9.
7.2. Two-Dimensional Symmetric Arrays
The elements of the array are assumed to lie in the xy plane, at z = 0. The position of the nth element is d_n = (x_n, y_n, 0), as shown in Figure 27.
Figure 27. Arbitrary planar array.
The output or radiation pattern of an antenna array (or spatial filter) is given by

    T(w, D, kx, ky) = Σ_{n=1}^{N} w_n f_n(kx, ky) e^{j(kx x_n + ky y_n)},   (7.1)

where w_n is the weight multiplying the signal of the nth element, and f_n(kx, ky) is the antenna gain of the nth element in the direction determined by (kx, ky). It is desired for the array pattern to have a maximum in the desired direction, denoted (kxd, kyd).
This chapter deals with two-dimensional arrays that are symmetric about the origin. That is, if an element is located at d_n = (x_n, y_n, 0) and not at the origin, there exists another element in the array with the same weighting coefficient located at -d_n = (-x_n, -y_n, 0). This constraint keeps the array factor real when the weights are real, allowing efficient computation of the results.
When the antennas in the array are identical (have the same individual radiation pattern), the radiation pattern for the entire antenna array takes the form given in (7.2) if there is an element at the origin (odd number of elements). If there is an even number of elements, the radiation pattern has the form given in (7.3):

    T(w, D, kx, ky) = f(kx, ky) [ w1 + Σ_{n=2}^{(N+1)/2} 2 w_n cos(kx x_n + ky y_n) ],   (7.2)

    T(w, D, kx, ky) = f(kx, ky) Σ_{n=1}^{N/2} 2 w_n cos(kx x_n + ky y_n).   (7.3)
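The even-N form (7.3) can be sketched directly. The positions, weights, and element pattern below are illustrative placeholders (not results from this chapter), chosen so that the normalization T(0, 0) = 1 holds:

```python
import numpy as np

def symmetric_pattern(w, xy, f, kx, ky):
    # Radiation pattern (7.3) for a symmetric planar array with an even
    # number of elements: xy lists one element of each symmetric pair
    # (x_n, y_n) / (-x_n, -y_n), and the pattern
    #   T = f(kx, ky) * sum_n 2 w_n cos(kx x_n + ky y_n)
    # is real whenever the weights w are real.
    x, y = xy[:, 0], xy[:, 1]
    return f(kx, ky) * np.sum(2.0 * w * np.cos(kx * x + ky * y))

# Hypothetical 4-element symmetric array (two listed pairs) of
# omnidirectional elements (f = 1); weights chosen so T(0, 0) = 1.
xy = np.array([[0.45, 0.0], [0.0, 0.52]])
w = np.array([0.30, 0.20])
T_broadside = symmetric_pattern(w, xy, lambda kx, ky: 1.0, 0.0, 0.0)
```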
7.3. Sidelobe-Minimizing Weights for Two-Dimensional Arrays
A method of sidelobe minimization for symmetric linear arrays with real weights was given as an example in Section 4.2. The results are now extended to two-dimensional arrays of non-isotropic elements. It is assumed that the array positions D are known, that the array is to be steered toward θd = 0, i.e., (kxd, kyd) = (0, 0), and that the main beam is to have a specified beamwidth.
The discussion follows the development in Section 4.2. The transition region is defined as the region in (kx, ky) space in which the sidelobes are not to be suppressed. The suppression region Θ is now two-dimensional, and a circular transition region will be assumed; for a circular transition region, the cutoff occurs when θ ≥ θc. The suppression region can be specified as

    Θ = { (kx, ky) : kc² ≤ kx² + ky² ≤ (2π/λ)² },   (7.4)

where the cutoff value kc = 2π sin(θc)/λ is specified. The cutoff value dictates how wide or narrow the array's main beam is to be. The suppression region is illustrated in Figure 28 in (kx, ky) space. The region of the (kx, ky) plane with magnitude less than 2π/λ is commonly referred to as the visible region; values of (kx, ky) outside this region do not correspond to any value of (θ, φ) at the frequency of interest.
Figure 28. Suppression region for twodimensional arrays.
The suppression region is sampled at R points, as in Section 4.2. Each sample point is denoted by (kxi, kyi) for i = 1, 2, ..., R. The parameter R is chosen sufficiently large that the resulting radiation pattern is suppressed even between sample points. The LP problem of (4.20) is rewritten in (7.5) to include the non-isotropic element pattern:

    min t
    s.t. T(w, D, 0, 0) = 1,
         |T(w, D, kxi, kyi)| ≤ t,  i = 1, 2, ..., R.   (7.5)

The constraints in (7.5) are again linear functions of the weights, as seen in (7.2)-(7.3). Hence, the constraints in (7.5) can be rewritten as affine inequalities exactly as done in Section 4.2. The result is that the problem of finding the optimal sidelobe-suppressing weights for a two-dimensional array of non-isotropic elements is again a linear program, and therefore rapidly solvable.
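For a symmetric array with real weights, the LP of (7.5) can be sketched as follows. The solver (`scipy.optimize.linprog`) is an assumed external tool; the dissertation does not specify one. The geometry and suppression region are illustrative: a symmetric four-element linear array with sidelobes suppressed more than 30° from broadside (a 60° beamwidth, as in Case 1).

```python
import numpy as np
from scipy.optimize import linprog

def min_sidelobe_weights(xy, K_sup):
    # LP of (7.5) for a symmetric even-N array with real weights:
    # minimize t subject to T(0,0) = 1 and -t <= T(kxi, kyi) <= t at
    # each sample.  xy lists one element of each symmetric pair (in
    # wavelengths); K_sup is an R x 2 array of (kx, ky) samples.
    m, R = len(xy), len(K_sup)
    A = 2.0 * np.cos(K_sup @ xy.T)                # T(k_i) = A[i] . w, per (7.3)
    c = np.r_[np.zeros(m), 1.0]                   # variables [w, t]
    A_ub = np.block([[A, -np.ones((R, 1))],
                     [-A, -np.ones((R, 1))]])
    A_eq = np.r_[2.0 * np.ones(m), 0.0][None, :]  # T(0,0) = 2 sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * R),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * m + [(0.0, None)])
    return res.x[:m], res.x[m]

# Symmetric 4-element linear array along x: elements at +/-0.25 and
# +/-0.75 wavelengths.  With lambda = 1, kx = 2*pi*cos(theta), so the
# suppression region kx in [pi, 2*pi] corresponds to angles more than
# 30 deg from broadside.
xy = np.array([[0.25, 0.0], [0.75, 0.0]])
kx = np.linspace(np.pi, 2 * np.pi, 400)
K_sup = np.c_[kx, np.zeros_like(kx)]
w, t = min_sidelobe_weights(xy, K_sup)
```

For this half-wavelength-spaced case the optimum coincides with the Dolph-Chebyshev taper, and `t` matches the Chebyshev sidelobe ratio 1/(5√2) ≈ 0.1414 (about -17 dB).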
As an example, consider the 7-element hexagonal array of Figure 19, but with a radius of 0.77λ. A weighting method with a linear phase taper (from Section 3.2) is used for comparison. When the array is steered to broadside, the sidelobe level for the linear phase-tapered array is -10.9 dB, and the beamwidth is 30°. Using this beamwidth to define the suppression region as in Figure 28, the optimal weights can be determined. The sidelobe level of the array factor with the optimal weights is -13.9 dB. The optimal array factor is plotted, along with the array factor using the phase-tapered weights, in Figure 29 (elevation plot for φ = 0°) and Figure 30 (elevation plot for φ = 45°). The positions and optimal weights are listed in Table XVIII.
Figure 29. Array factors for optimally weighted and phase-tapered array (φ = 0°).
Figure 30. Array factors for optimally weighted and phase-tapered array (φ = 45°).
TABLE XVIII
OPTIMAL WEIGHTS FOR 7-ELEMENT HEXAGONAL ARRAY

Position           Weight
(0, 0)             0.200
±(0.77, 0)         0.133
±(0.385, 0.667)    0.133
±(0.385, -0.667)   0.133
7.4. Sidelobe-Minimizing Weights for Scanned Two-Dimensional Arrays
In this section, the procedure of Section 7.3 is extended to two-dimensional arrays scanned from broadside. It is assumed that the array is scanned towards (θd, φd), so that the wavevector components in the desired direction are

    kxd = (2π/λ) sin(θd) cos(φd),   (7.6)
    kyd = (2π/λ) sin(θd) sin(φd).   (7.7)

Two beamwidths are specified. The first is the polar (elevation) beamwidth ∆θ, which is the beamwidth when the azimuth angle is fixed at φd. The second is the azimuth beamwidth ∆φ, which is the beamwidth when the polar (elevation) angle is fixed at θd.
The method of Section 7.3, in which the weights are real, will not produce optimal weights when the array is to be steered from broadside. As a result, the complex method of Section 6.3 must be used. Once the suppression region has been specified as in Figure 28, the method can be directly implemented for the two-dimensional case.
An example using this method will now be presented. A 5×5 rectangular array with uniform spacing of λ/4 is used. The results are compared to the performance of an array with a linear phase taper, as discussed in Section 3.2. The array is scanned to (θd, φd) = (90°, 0°). The magnitude of the array factor is plotted in Figure 31; the maximum sidelobe level is -12.04 dB. For comparison, the beamwidth selected for determining the optimal weights is identical to that of the linear phase-taper array: ∆θ/2 = 68.7° and ∆φ/2 = 38.2°.
Figure 31. AF for phase-tapered weights; (a) elevation plot, (b) azimuth plot.
The following parameters define the boundary of the suppression region in the (kx, ky) plane:

    kxφ = (2π/λ) sin(θd) cos(φd + ∆φ/2),   (7.8)
    kyφ = (2π/λ) sin(θd) sin(φd + ∆φ/2),   (7.9)
    kxθ = (2π/λ) sin(θd - ∆θ/2) cos(φd),   (7.10)
    kyθ = (2π/λ) sin(θd - ∆θ/2) sin(φd).   (7.11)

Increasing θ for a fixed azimuth angle is equivalent to moving outward in the radial direction in the (kx, ky) plane. Increasing φ for a fixed elevation angle is equivalent to moving in a circle (at a fixed distance from the origin) in the (kx, ky) plane.
Consequently, the suppression region Θ has the form plotted in Figure 32 for this example.
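Equations (7.6)-(7.11) are straightforward to evaluate; a minimal sketch using the text's example scan angles (the function name and interface are illustrative):

```python
import math

def suppression_boundary(theta_d, phi_d, d_theta, d_phi, lam=1.0):
    # Boundary wavevectors (7.6)-(7.11) for an array scanned to
    # (theta_d, phi_d); all angles in radians, beamwidths d_theta, d_phi.
    k = 2.0 * math.pi / lam
    kxd = k * math.sin(theta_d) * math.cos(phi_d)                     # (7.6)
    kyd = k * math.sin(theta_d) * math.sin(phi_d)                     # (7.7)
    kx_phi = k * math.sin(theta_d) * math.cos(phi_d + d_phi / 2)      # (7.8)
    ky_phi = k * math.sin(theta_d) * math.sin(phi_d + d_phi / 2)      # (7.9)
    kx_theta = k * math.sin(theta_d - d_theta / 2) * math.cos(phi_d)  # (7.10)
    ky_theta = k * math.sin(theta_d - d_theta / 2) * math.sin(phi_d)  # (7.11)
    return (kxd, kyd), (kx_phi, ky_phi), (kx_theta, ky_theta)

# The example in the text: scanned to (90 deg, 0 deg), with half-widths
# d_theta/2 = 68.7 deg and d_phi/2 = 38.2 deg.
desired, az_edge, el_edge = suppression_boundary(
    math.radians(90.0), 0.0,
    math.radians(2 * 68.7), math.radians(2 * 38.2))
```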
Figure 32. Suppression region for an array scanned away from broadside.
Employing the method of Section 6.3 with the suppression region of Figure 32, the optimal weights can be determined; they are listed along with their respective positions in Table XIX. The sidelobe level is reduced to -31.2 dB, showing the superiority of this method over the linear phase-taper method. The array factor using the optimal weights is plotted, along with the array factor using the phase-tapered weights, in Figure 33 for an azimuth scan with a fixed elevation angle (θ = 90°). The elevation scan is plotted in Figure 34 with a fixed azimuth angle (φ = 0°).
TABLE XIX
OPTIMAL WEIGHTS WITH ASSOCIATED POSITIONS

Position         Weight        Position        Weight
(-0.5, -0.5)     0.103∠246°    (0, 0.25)       0.020∠288°
(-0.5, -0.25)    0.192∠31°     (0, 0.5)        0.166∠355°
(-0.5, 0)        0.227∠227°    (0.25, -0.5)    0.126∠235°
(-0.5, 0.25)     0.204∠22°     (0.25, -0.25)   0.179∠133°
(-0.5, 0.5)      0.090∠234°    (0.25, 0)       0.291∠256°
(-0.25, -0.5)    0.184∠118°    (0.25, 0.25)    0.211∠142°
(-0.25, -0.25)   0.182∠246°    (0.25, 0.5)     0.108∠244°
(-0.25, 0)       0.182∠246°    (0.5, -0.5)     0.066∠126°
(-0.25, 0.25)    0.180∠221°    (0.5, -0.25)    0.174∠346°
(-0.25, 0.5)     0.137∠113°    (0.5, 0)        0.196∠143°
(0, -0.5)        0.210∠350°    (0.5, 0.25)     0.185∠350°
(0, -0.25)       0.067∠168°    (0.5, 0.5)      0.060∠137°
(0, 0)           0.418∠4°
Figure 33. Azimuth plot of array factors with optimal and phasetapered weights.
Figure 34. Elevation plot of array factors with optimal and phasetapered weights.
7.5. Symmetric Arrays of Omnidirectional Elements
In this section, the antennas are omnidirectional, so that

    f(kx, ky) = 1.   (7.12)
The array elements are allowed to assume an arbitrary geometry, subject to the arrays being symmetric, as in (7.2) and (7.3). The goal of this section is to determine the geometry that, together with the optimum weights of Section 7.2, yields the minimum sidelobe level. The positions are varied using the PSO algorithm, as in Chapter 6, in order to determine an optimal geometry for minimum sidelobes. The parameters and method of implementation are the same as in Chapter 6.
The algorithm is again run with P particles, which move around and interact via the PSO algorithm. As the element positions are varied, the optimum weights are calculated for each particle (or array) at every iteration. Consequently, the weights and the array geometry are optimized simultaneously. The number of particles is increased until successive runs of the algorithm return identical results. Because the algorithm starts with random and independent particles (or arrays) every time, and because it consistently returns identical solutions, it is likely that the results are globally optimal.
Two cases are considered in this section. In Case 1, the beamwidth is 60°; this indicates that the sidelobes are to be suppressed when θ ≥ 30°. The cutoff value can be calculated to be

    kc1 = (2π/λ) sin(30°) = π/λ.   (7.13)

For Case 2, a smaller beamwidth of 30° is considered. The sidelobes are therefore suppressed when θ ≥ 15°, and the cutoff value is

    kc2 = (2π/λ) sin(15°) = 0.518π/λ.   (7.14)
For symmetric arrays, there exist no optimal 2- or 3-element configurations, as the symmetry forces such arrays to be linear, and a linear array cannot suppress sidelobes in two dimensions because its pattern is a function of only one variable. Results are therefore presented for arrays of size N = 4-7.
The number of particles required for regular convergence is given in Table XX, along with the average simulation time. The simulations were performed on a 3.0 GHz processor running MATLAB.
TABLE XX
NUMBER OF REQUIRED PARTICLES FOR PSO AND COMPUTATION TIME FOR
N = 4-7
N P Time (hours)
4 290 3
5 300 3.5
6 800 7
7 1000 10
The optimal arrays for Case 1 are plotted in Figure 35. For N = 4, 5, and 7, the optimal arrays are close to circular, with an increasingly large radius and a center element when the array has an odd number of elements. The result for N = 6 is distinct, as it takes a cross shape. The optimal array for N = 7 is also a hexagonal array, as discussed in Section 5.6.
The optimal positions are listed with the sidelobe levels in Table XXI. The corresponding optimal weights are given in Table XXII. In Figure 36, the magnitude of the array factor is plotted as a function of the elevation angle θ for several azimuth angles. The main beam is almost identical within the transition region (θ ≤ 30°) for distinct azimuth angles. This indicates a circularly symmetric main beam, as expected with a circular suppression region.
Figure 35. Optimal symmetric array locations for Case 1 (dimensions in λ).
TABLE XXI
OPTIMAL SLL AND POSITIONS FOR CASE 1 (DIMENSIONS IN λ)

      (x1, y1)    (x2, y2)     (x3, y3)       (x4, y4)        SLL (dB)
N=4   (0.45, 0)   (0, 0.52)                                   -5.5
N=5   (0, 0)      (0.67, 0)    (0, 0.67)                      -6.9
N=6   (0.41, 0)   (1.24, 0)    (0, 0.67)                      -7.9
N=7   (0, 0)      (0.77, 0)    (0.39, 0.67)   (0.39, -0.67)   -13.9
TABLE XXII
OPTIMAL WEIGHTS FOR CASE 1

      w1      w2      w3      w4
N=4   0.556   0.444
N=5   0.268   0.183   0.183
N=6   0.442   0.159   0.400
N=7   0.200   0.133   0.133   0.133
Figure 36. Magnitude of T(θ) at distinct azimuthal angles (Case 1), N=7.
The optimal arrays for Case 2 are plotted in Figure 37. The positions and sidelobe levels for this case are presented in Table XXIII, and the optimal weights are listed in Table XXIV. The arrays for this case are similar to the results for Case 1, except that they are spread out farther, which is expected for a narrower main beam. The results for N = 6 differ significantly between the two cases, indicating that the results can vary significantly with the beamwidth. The results of Case 2 for N = 6 and N = 7 are almost identical, the difference being the center element. Note that the addition of this element lowers the sidelobe level by only 0.1 dB. This information would be advantageous to an array designer in determining the number of elements needed to achieve a given sidelobe level; the extra complexity introduced by adding a seventh element in this case would not be very beneficial. The magnitude of the array factor at distinct azimuthal angles is plotted in Figure 38 for the N = 7 array. The main beam is again identical for θ ≤ 15° at distinct azimuth angles, indicating a circular main beam.
Figure 37. Optimal symmetric array locations for Case 2 (dimensions in λ).
TABLE XXIII
OPTIMAL SLL AND POSITIONS FOR CASE 2 (DIMENSIONS IN λ)

      (x1, y1)    (x2, y2)       (x3, y3)       (x4, y4)        SLL (dB)
N=4   (0.49, 0)   (0, 0.70)                                    -1.9
N=5   (0, 0)      (0.79, 0)      (0, 0.79)                     -3.2
N=6   (0.92, 0)   (0.46, 0.80)   (0.46, -0.80)                 -5.7
N=7   (0, 0)      (0.92, 0)      (0.46, 0.80)   (0.46, -0.80)  -5.8
TABLE XXIV
OPTIMAL WEIGHTS FOR CASE 2

      w1      w2      w3      w4
N=4   0.661   0.339
N=5   0.268   0.183   0.183
N=6   0.442   0.159   0.400
N=7   0.200   0.133   0.133   0.133
Figure 38. Magnitude of T(θ) at distinct azimuthal angles (Case 2), N=7.
7.6. Symmetric Arrays of Patch Antennas
In this section, symmetric arrays of patch antennas steered to θd = 0 are considered. The method of solution is identical to that of Section 7.3. When a microstrip or patch antenna has a thin dielectric, the far-field components of the electric field are approximately given by (7.15) and (7.16) for polar angles θ ≤ π/2 [91]. For θ > π/2, the region below the patch, the radiated fields are assumed to be zero. In (7.15)-(7.16), k is the free-space wavenumber, W is the width of the patch, and L is the length of the patch.
    Eθ = E0 [ sin((kW/2) sinθ sinφ) / ((kW/2) sinθ sinφ) ] cos((kL/2) sinθ cosφ) cosφ,   (7.15)

    Eφ = -E0 [ sin((kW/2) sinθ sinφ) / ((kW/2) sinθ sinφ) ] cos((kL/2) sinθ cosφ) cosθ sinφ.   (7.16)
The normalized pattern to be used for f(θ, φ) in (7.1) is

    f(θ, φ) = sqrt( (Eθ/E0)² + (Eφ/E0)² ).   (7.17)
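Equations (7.15)-(7.17) can be sketched as a single pattern function. This is an illustrative implementation, assuming the W = L = 0.5λ dimensions used below; at θ = 0 the pattern reduces to 1 for any φ:

```python
import numpy as np

def patch_pattern(theta, phi, W=0.5, L=0.5, lam=1.0):
    # Normalized patch pattern f(theta, phi) of (7.15)-(7.17) for
    # theta <= pi/2; the fields are taken as zero below the patch.
    # np.sinc(x/pi) supplies the sin(x)/x factor.
    if theta > np.pi / 2:
        return 0.0
    k = 2.0 * np.pi / lam
    X = (k * W / 2.0) * np.sin(theta) * np.sin(phi)
    common = np.sinc(X / np.pi) * np.cos((k * L / 2.0) * np.sin(theta) * np.cos(phi))
    e_theta = common * np.cos(phi)                    # (7.15) / E0
    e_phi = -common * np.cos(theta) * np.sin(phi)     # (7.16) / E0
    return float(np.sqrt(e_theta ** 2 + e_phi ** 2))  # (7.17)

f_broadside = patch_pattern(0.0, 0.0)   # pattern maximum at theta = 0
```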
In this section, the patch dimensions are chosen to be W = L = 0.5λ. The directivity of this antenna is numerically calculated to be 9.34 dB. The patch pattern is plotted in Figure 39 for two fixed azimuth angles. The pattern here is complicated enough that an analytical weighting method minimizing the sidelobes of a patch array is highly unlikely to exist. However, using the development of Section 7.2, adding the pattern does not significantly increase the difficulty of determining the optimal weights. Using a realistic, complicated antenna pattern helps to highlight the utility of the LP method of weight selection.
(a) φ = 0°
(b) φ = 90°
Figure 39. Magnitude of patch pattern (in dB).
The two cases discussed in Section 7.3 are again considered here. Using the solution method discussed previously, the optimal arrays for Case 1 (beamwidth equal to 60°) are presented in Figure 40. The optimal arrays of patch antennas are skewed somewhat owing to the non-isotropy of the antenna pattern. The results for N=4, 5, and 7 are fairly similar to the Case 1 results of omnidirectional elements; however, they are slightly rotated and more spread out. The arrays are more spread out (or elongated) because the array factor effectively has a narrower mainbeamwidth (due to the patch pattern, which decreases in magnitude for θ > 0°). The narrower mainbeam leads to more spread-out arrays, as seen in Case 2 of Section 7.3. The result for N=6 is a significant departure from the omnidirectional case, which indicates that the antenna pattern must be taken into account in determining an optimal geometry.
The positions and optimal sidelobe levels for Case 1 of patch elements are listed in
Table XXV. The corresponding optimal weights are listed in Table XXVI. The
magnitude of the radiation pattern is plotted in Figure 41 as a function of θ for three
distinct azimuth angles for the N=7 array. The directivity of this array is evaluated
numerically to be 17.67 dB.
Figure 40. Optimal symmetric patch array locations for Case 1 (units of λ).
TABLE XXV
OPTIMAL SLL AND POSITIONS FOR CASE 1 OF PATCH ELEMENTS (UNITS OF λ)

       (x1, y1)       (x2, y2)       (x3, y3)       (x4, y4)       SLL (dB)
N=4    (0.46, 0.42)   (0.49, 0.42)                                 -12.5
N=5    (0.00, 0.00)   (0.61, 0.57)   (0.65, 0.53)                  -13.1
N=6    (0.34, 0.68)   (0.32, 0.69)   (0.54, 0.01)                  -16.6
N=7    (0.00, 0.00)   (0.50, 0.74)   (0.85, 0.04)   (0.37, 0.77)   -21.5
TABLE XXVI
OPTIMAL WEIGHTS FOR CASE 1 WITH PATCH ELEMENTS

       w1      w2      w3      w4
N=4    0.256   0.244
N=5    0.233   0.193   0.191
N=6    0.133   0.133   0.234
N=7    0.218   0.124   0.132   0.135
Figure 41. Magnitude of T(θ) at distinct azimuth angles (Case 1), N=7 (patch).
Next, Case 2 (beamwidth equal to 30°) is considered with patch elements. The optimal arrays are plotted in Figure 42. The arrays for this case are fairly similar to the results of Case 1, except for again being spread out farther. The result for N=6 is more symmetric and less football-shaped, while the result for N=7 is again a hexagonally sampled array.
The optimal positions along with the sidelobe levels are listed in Table XXVII. The sidelobe level increased on average by 7.5 dB when compared to Case 1 of this section. The optimal weights are given in Table XXVIII. The magnitude of the radiation pattern is plotted as a function of θ for distinct azimuth angles in Figure 43 for the N=7 array. The directivity of this array is evaluated numerically to be 17.72 dB.
Figure 42. Optimal symmetric patch array locations for Case 2 (units of λ).
TABLE XXVII
OPTIMAL SLL AND POSITIONS FOR CASE 2 OF PATCH ELEMENTS (UNITS OF λ)

       (x1, y1)       (x2, y2)       (x3, y3)       (x4, y4)       SLL (dB)
N=4    (0.64, 0.61)   (0.77, 0.61)                                 -5.7
N=5    (0.00, 0.00)   (1.27, 0.11)   (0.11, 1.12)                  -7.5
N=6    (1.05, 0.05)   (0.58, 0.94)   (0.51, 0.99)                  -9.6
N=7    (0.00, 0.00)   (0.60, 1.02)   (0.63, 1.01)   (1.20, 0.01)   -10.7
TABLE XXVIII
OPTIMAL WEIGHTS FOR CASE 2 WITH PATCH ELEMENTS

       w1      w2      w3      w4
N=4    0.272   0.228
N=5    0.194   0.184   0.219
N=6    0.178   0.163   0.159
N=7    0.061   0.163   0.161   0.145
Figure 43. Magnitude of T(θ) at distinct azimuth angles (Case 2), N=7 (patch).
7.7. Wideband Weighting Method
The previous sections focused on optimizing an antenna array to minimize sidelobes at a single frequency. In practice, arrays transmit and receive information over a range of frequencies, generally centered about some carrier frequency. An array that has low sidelobes at a given frequency is not guaranteed to have low sidelobes at other frequencies within the band in which it operates. In this section, a method of choosing weights that yield the minimum sidelobes over a range of frequencies is developed.
The minimum sidelobe level in a wideband array is written as

\mathrm{SLL} = \max_{(k_x, k_y, \lambda) \in \Theta} \left| T(\mathbf{w}, \mathbf{d}, k_x, k_y, \lambda) \right|, \qquad (7.18)
where again Θ is the region in which the sidelobes are to be suppressed. It will be assumed that the sidelobes are to be suppressed within a continuous frequency range, and in the region given by θ ≥ θ_c for all frequencies. This interval will be written as

[\lambda_L, \lambda_U] = \left[\frac{c}{f_U}, \frac{c}{f_L}\right], \qquad (7.19)

where f_L < f_U.
The weighting method used in Section 7.3 could be extended such that every constraint is duplicated at every frequency; this was the method proposed in [92]. While this would be technically correct, it would add an intractable number of constraints, particularly for 2-D arrays or for very wideband arrays, and the optimization would often be computationally infeasible. Hence, a more efficient method is developed.
Defining

\Theta(\lambda_i) = \left\{ (k_x, k_y) : \left(\frac{2\pi \sin\theta_c}{\lambda_i}\right)^2 \le k_x^2 + k_y^2 \le \left(\frac{2\pi}{\lambda_i}\right)^2 \right\}, \qquad (7.20)

it follows that

\Theta = \bigcup_i \Theta(\lambda_i), \qquad (7.21)

where the union is over all wavelengths within [λ_L, λ_U].
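The region Θ of (7.20)–(7.21) is straightforward to test numerically. The sketch below is illustrative only; the sampled band and θ_c are assumed example values, not the dissertation's.

```python
import numpy as np

def in_suppression_region(kx, ky, lambdas, theta_c):
    """Membership test for Theta = union over i of Theta(lambda_i), per (7.20)-(7.21).

    A point (kx, ky) is in Theta if, for SOME wavelength lam in `lambdas`,
    (2*pi*sin(theta_c)/lam)^2 <= kx^2 + ky^2 <= (2*pi/lam)^2."""
    r2 = kx**2 + ky**2
    for lam in lambdas:
        lo = (2 * np.pi * np.sin(theta_c) / lam) ** 2
        hi = (2 * np.pi / lam) ** 2
        if lo <= r2 <= hi:
            return True
    return False

# Example band: f from 0.75*fc to 1.25*fc (FBW = 0.5), theta_c = 30 degrees.
# Wavelengths in units of lambda_c: lam = (fc/f) * lambda_c.
lambdas = 1.0 / np.linspace(0.75, 1.25, 51)   # samples of [lambda_L, lambda_U]
theta_c = np.radians(30.0)
print(in_suppression_region(2 * np.pi * 0.8, 0.0, lambdas, theta_c))  # -> True
```

In practice one would sample Θ on a grid of (k_x, k_y) points and keep only those passing this test, yielding the R sidelobe-constraint samples used later in the chapter.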
For the wideband case, the total radiation pattern is written as the product of the element pattern and the array factor,

T(\mathbf{w}, \mathbf{d}, k_x, k_y, \lambda) = f(k_x, k_y, \lambda) \, AF(\mathbf{w}, \mathbf{d}, k_x, k_y). \qquad (7.22)

The two-dimensional array factor is rewritten from (2.21) with the wavevector definitions in (2.7) as

AF(\mathbf{w}, \mathbf{d}, k_x, k_y) = \sum_{n=1}^{N} w_n e^{-j(k_x x_n + k_y y_n)}. \qquad (7.23)

Equation (7.23) indicates that the array factor does not depend on λ when k_x and k_y are specified.
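The frequency independence of (7.23) is easy to see in code: a direct implementation takes only the positions and (k_x, k_y), with no wavelength argument. This is an illustrative sketch; the four-element symmetric layout below is a hypothetical example, not a geometry from the tables.

```python
import numpy as np

def array_factor(w, x, y, kx, ky):
    """2-D array factor AF(w, d, kx, ky) of (7.23): sum_n w_n * exp(-j(kx*x_n + ky*y_n)).

    Positions x, y and wavenumbers kx, ky only need consistent units (e.g.
    positions in center wavelengths, k in rad per center wavelength). Note
    there is no lambda argument: frequency enters only through (kx, ky)."""
    w = np.asarray(w, dtype=complex)
    phase = kx * np.asarray(x) + ky * np.asarray(y)
    return np.sum(w * np.exp(-1j * phase))

# Broadside check: at (kx, ky) = (0, 0) the array factor is just sum(w).
w = [0.25, 0.25, 0.25, 0.25]                 # uniform real weights (hypothetical)
x = [0.49, -0.49, 0.0, 0.0]                  # a symmetric example layout
y = [0.0, 0.0, 0.70, -0.70]
print(abs(array_factor(w, x, y, 0.0, 0.0)))  # -> 1.0
```

For real weights the array factor also satisfies AF(-k_x, -k_y) = AF(k_x, k_y)*, which is why symmetric-array patterns in this chapter are real-valued.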
This is not true for the element pattern, f(k_x, k_y, λ), which in general will not be independent of λ when k_x and k_y are specified. This can be seen from the patch element pattern of Section 7.6 given in (7.15)–(7.17), which cannot be written as a function of only two variables.
Most antennas will exhibit a notable change in radiation pattern over the band of operation. In this case, the variation of the antenna pattern with frequency should be taken into account. To accomplish this, an auxiliary antenna pattern H(k_x, k_y) is defined as

H(k_x, k_y) = \max_{\lambda \in [\lambda_L, \lambda_U]} f(k_x, k_y, \lambda). \qquad (7.24)
This auxiliary antenna pattern is the maximum value of the antenna pattern evaluated at (k_x, k_y) over the frequency range of interest. Note that the maximum in (7.24) is taken only over the frequency range for which (k_x, k_y) is in the visible region. For instance, at the value (k_x, k_y) = (2π/λ_U, 0), the only frequency that has this value in the visible region is f = f_U. For the narrowband or single-frequency case, the auxiliary antenna pattern reduces to the antenna pattern at the frequency of interest.
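A sketch of the auxiliary pattern (7.24), restricting the maximum to wavelengths whose visible region contains (k_x, k_y); the element pattern and band sampling below are assumed examples.

```python
import numpy as np

def auxiliary_pattern(f, kx, ky, lambdas):
    """Auxiliary pattern H(kx, ky) of (7.24): max of f(kx, ky, lam) over the band.

    Only wavelengths for which (kx, ky) is in the visible region,
    kx^2 + ky^2 <= (2*pi/lam)^2, contribute to the maximum."""
    r2 = kx**2 + ky**2
    visible = [lam for lam in lambdas if r2 <= (2 * np.pi / lam) ** 2]
    if not visible:
        return 0.0        # (kx, ky) is invisible at every frequency in the band
    return max(f(kx, ky, lam) for lam in visible)

# With a frequency-independent isotropic element, H reduces to the element
# pattern itself, as noted in the text for the narrowband case.
iso = lambda kx, ky, lam: 1.0
lambdas = np.linspace(0.8, 1.333, 28)   # assumed band, in center wavelengths
print(auxiliary_pattern(iso, 0.0, 0.0, lambdas))  # -> 1.0
```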
Using (7.22) and (7.24), it follows that

\left| T(\mathbf{w}, \mathbf{d}, k_x, k_y, \lambda) \right| \le H(k_x, k_y) \left| AF(\mathbf{w}, \mathbf{d}, k_x, k_y) \right|. \qquad (7.25)

Hence, the total radiation pattern as a function of frequency can be minimized by minimizing the right-hand side of (7.25), which is only a function of k_x and k_y. The minimization is performed over the region specified in (7.21). Letting

k_{cL} = \frac{2\pi \sin\theta_c}{\lambda_L}, \qquad (7.26)

the suppression region for the wideband case can be illustrated graphically, as shown in Figure 44. The wideband case is equivalent to minimizing the sidelobes in the (k_x, k_y) plane beginning at the cutoff value for the lowest frequency (k_{cL}) and extending the region to the largest wavenumber in the visible region at the highest frequency (2π/λ_U).

Figure 44. Suppression region for two-dimensional arrays over a frequency band.
Writing

G(\mathbf{w}, \mathbf{d}, k_x, k_y) = H(k_x, k_y) \, AF(\mathbf{w}, \mathbf{d}, k_x, k_y), \qquad (7.27)

the single-frequency sidelobe-minimizing optimization problem of (7.5) is rewritten for the wideband case in (7.28).

\min t \quad \text{s.t.} \quad G(\mathbf{w}, \mathbf{D}, 0, 0) = 1, \qquad \left| G(\mathbf{w}, \mathbf{D}, k_{xi}, k_{yi}) \right| \le t, \quad i = 1, 2, \ldots, R \qquad (7.28)

In (7.28), the R samples are taken over the suppression region illustrated in Figure 44. The solution to (7.28) yields weights that produce the minimum sidelobe level over the frequency band of interest.
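A minimal sketch of (7.28) as a linear program, under the symmetric-array assumption used throughout this chapter: positions are listed once per mirrored pair (multiplicity 2) or once for a center element (multiplicity 1), mirrored elements share a real weight, so G is real and |G| ≤ t splits into two linear inequalities. SciPy's `linprog` stands in here for whatever LP solver was actually used; the function and its arguments are illustrative, not the dissertation's code.

```python
import numpy as np
from scipy.optimize import linprog

def min_sll_weights(x, y, mult, H, samples):
    """Sketch of the wideband sidelobe LP (7.28) for a symmetric array.

    x, y    : one position per symmetric pair (or the center element)
    mult    : 2 for a mirrored pair, 1 for a center element
    H       : callable H(kx, ky), the auxiliary pattern of (7.24)
    samples : (kx, ky) points covering the suppression region of Figure 44
    Variables are [w_1 .. w_N, t]; G = H * sum_n mult_n * w_n * cos(kx*x_n + ky*y_n).
    """
    x, y, mult = map(np.asarray, (x, y, mult))
    N = len(x)

    def g_row(kx, ky):
        return H(kx, ky) * mult * np.cos(kx * x + ky * y)

    A_ub, b_ub = [], []
    for kx, ky in samples:
        row = g_row(kx, ky)
        A_ub.append(np.append(row, -1.0))     #  G - t <= 0
        A_ub.append(np.append(-row, -1.0))    # -G - t <= 0
        b_ub += [0.0, 0.0]
    A_eq = [np.append(g_row(0.0, 0.0), 0.0)]  # mainbeam constraint G(0,0) = 1
    c = np.zeros(N + 1)
    c[-1] = 1.0                               # minimize t
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * N + [(0.0, None)])
    return res.x[:N], res.x[-1]               # weights, sidelobe level (linear scale)

# Illustrative use: a 5-element-style symmetric array (center + two mirrored
# pairs), frequency-independent elements (H = 1), sidelobes sampled on an annulus.
H = lambda kx, ky: 1.0
mult, x, y = [1, 2, 2], [0.0, 0.79, 0.0], [0.0, 0.0, 0.79]
samples = [(r * np.cos(a), r * np.sin(a))
           for r in np.linspace(np.pi, 2 * np.pi, 6)
           for a in np.linspace(0.0, 2 * np.pi, 13)]
w_opt, sll = min_sll_weights(x, y, mult, H, samples)  # sll in linear scale
```

The returned `sll` would be converted to dB (20·log10) for comparison with the tables in this chapter.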
When the antenna elements do not have significantly different radiation patterns over the frequency band of interest, the radiation pattern can be approximated as

f(\theta, \phi, \lambda_i) \approx f(\theta, \phi, \lambda_j), \qquad (7.29)

for all λ_i, λ_j within the frequency band. For this case, determining the wideband weights is as simple as extending the suppression region as in Figure 37 and using the procedure of Section 7.2.
7.8. Optimal Wideband Arrays of Omnidirectional Antennas
In this section, the arrays are optimized to determine the minimum sidelobe level over a range of frequencies. The frequency range will be specified by the fractional bandwidth (FBW),

FBW = \frac{\Delta f}{f_c} = \frac{f_U - f_L}{f_c} = \frac{f_U - f_L}{(f_U + f_L)/2}, \qquad (7.30)

where f_c is the center frequency. Fractional bandwidths are considered wideband when 0.2 < FBW < 0.5, and ultrawideband when FBW ≥ 0.5 [93]. In this section, an ultrawideband case is considered in which FBW = 0.50. The antenna elements are omnidirectional and independent of frequency over the frequency range of interest, so that

f(\theta, \phi, \lambda) = 1. \qquad (7.31)

When an antenna's radiation pattern is independent of frequency over the bandwidth of interest, the antenna is referred to as frequency independent. Hence, the optimization in this section will focus on the array factor, which is equivalent to the total radiation pattern in this case. For comparison with the results of Section 7.4, the beamwidth will be 60°, so that the sidelobes will be suppressed for θ ≥ 30° at all frequencies.
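The fractional bandwidth of (7.30) in code form; the example frequencies are arbitrary.

```python
def fractional_bandwidth(f_L, f_U):
    """FBW per (7.30): bandwidth divided by center frequency f_c = (f_L + f_U)/2."""
    return (f_U - f_L) / ((f_U + f_L) / 2.0)

# FBW = 0.5 corresponds to f_U/f_L = 5/3, e.g. a hypothetical 3 GHz to 5 GHz band:
print(fractional_bandwidth(3e9, 5e9))  # -> 0.5
```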
The optimization procedures applied in the previous sections of this chapter are again sufficient for the problem at hand. The resulting optimal arrays are found for N = 4-7 and are presented in Figure 45. Note that the results are now given in units of λ_c = c/f_c. The optimal positions are also tabulated in Table XXIX. The corresponding optimal weights are listed in Table XXX.

Figure 45. Optimal symmetric array locations for FBW = 0.5 (units of λ_c).
TABLE XXIX
OPTIMAL SLL AND POSITIONS FOR OMNIDIRECTIONAL ELEMENTS (UNITS OF λc, FBW=0.5)

       (x1, y1)      (x2, y2)      (x3, y3)      (x4, y4)      SLL (dB)
N=4    (0.51, 0)     (0, 0.39)                                 -2.4
N=5    (0.0, 0.0)    (0.44, 0.44)  (0.44, 0.44)                -3.9
N=6    (0.65, 0.0)   (0.33, 0.56)  (0.33, 0.56)                -6.0
N=7    (0.0, 0.0)    (0.71, 0)     (0.36, 0.61)  (0.36, 0.61)  -7.2
TABLE XXX
OPTIMAL WEIGHTS FOR OMNIDIRECTIONAL ELEMENTS (FBW=0.5)

       w1      w2      w3      w4
N=4    0.189   0.311
N=5    0.183   0.204   0.204
N=6    0.169   0.169   0.169
N=7    0.043   0.160   0.160   0.160
The results are interesting when compared with the narrowband results of Section 7.4. The SLL increased by an average of 3.6 dB when the array is designed to perform in this ultrawideband situation. The arrays are slightly less spread out than in the narrowband case. The result for N=6 is a circular array in the wideband case, whereas it was a cross shape in the narrowband case. In addition, in extending the array from narrowband to ultrawideband, the SLL increased by only 1.9 dB for N=6. This is not a large penalty in SLL for greatly extending the bandwidth. However, the SLL increase was 6.7 dB for N=7, which is relatively large.
The total radiation pattern for the optimal array of size N=7 is now presented. Since it is now a function of frequency, the pattern is plotted as a function of θ for distinct azimuth angles at the lower frequency (f_L, given in Figure 46), the center frequency (f_c, given in Figure 47), and the upper frequency (f_U, given in Figure 48). Note that the beamwidth varies depending on the frequency. However, for all frequencies in the range of interest, the beamwidth is less than 60° and the sidelobes never rise above the SLL (-7.2 dB) in the suppression region, as desired.

Figure 46. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L.
Figure 47. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c.

Figure 48. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U.
7.9. Optimal Wideband Arrays of Patch Antennas
In this section, wideband arrays of patch antennas are examined. The bandwidth is selected to be FBW = 0.2, which is much smaller than the ultrawideband case of Section 7.8 but still wideband enough that the narrowband assumption is not valid. Patch antennas have radiation patterns that vary significantly with frequency, but they can be made to have a wider bandwidth using various methods, including adding slits [94] or adding a U-slot to the patch [95]. The beamwidth will again be 60° for comparison with the results of Section 7.6.
The normalized field patterns for the patch, given in (7.15)-(7.17), will now be rewritten as functions of frequency in (7.32)-(7.33).

E_\theta = E_0(\lambda) \frac{\sin\left(\frac{\pi W}{\lambda}\sin\theta\sin\phi\right)}{\frac{\pi W}{\lambda}\sin\theta\sin\phi} \cos\left(\frac{\pi L}{\lambda}\sin\theta\cos\phi\right) \cos\phi \qquad (7.32)

E_\phi = -E_0(\lambda) \frac{\sin\left(\frac{\pi W}{\lambda}\sin\theta\sin\phi\right)}{\frac{\pi W}{\lambda}\sin\theta\sin\phi} \cos\left(\frac{\pi L}{\lambda}\sin\theta\cos\phi\right) \cos\theta \sin\phi \qquad (7.33)

The normalized pattern to be used for f(θ, φ) as in (7.1) will be

f(\theta, \phi, \lambda) = \left(\frac{E_\theta}{E_0(\lambda)}\right)^2 + \left(\frac{E_\phi}{E_0(\lambda)}\right)^2. \qquad (7.34)

The implicit assumption in (7.32)-(7.34) is that E_0 is approximately constant over the frequency range of interest.
The PSO algorithm is again applied to determine the optimal positions. The resulting optimal arrays are found for N = 4-7 and are presented in Figure 49. The results are again given in units of λ_c = c/f_c. The optimal positions are also tabulated in Table XXXI. The corresponding optimal weights are listed in Table XXXII.
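For reference, the PSO of [73] can be sketched in a few lines. The parameters, bounds, and the quadratic test cost below are illustrative assumptions, not the dissertation's settings; in the geometry optimization of this chapter the cost would be the optimal SLL returned by the weight-selection LP for a candidate set of positions.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-1.5, 1.5),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer in the spirit of [73].

    cost : maps a position vector of shape (dim,) to a scalar to be minimized.
    Standard velocity update: inertia w plus cognitive (c1) and social (c2) pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # per-particle best positions
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pcost)].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest, pcost.min()

# Sanity check on a convex surrogate cost (true optimum at the origin):
best, val = pso_minimize(lambda p: np.sum(p**2), dim=4)
```

With these classic parameter choices the swarm contracts onto the global best, which is what makes the method effective on the multimodal geometry costs of this chapter (where many restarts are typically used).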
Figure 49. Optimal symmetric patch array locations for FBW = 0.2 (units of λ_c).
The arrays are similar to the narrowband case of patch elements with the same beamwidth given in Figure 33. The seven-element array is again approximately hexagonal, which has been a recurring theme throughout this work. The arrays for this case appear to be spread out farther than in the narrowband case, when measured in units of the center wavelength.

On average, the SLL increased by 1.9 dB in order to guarantee the sidelobe level over the frequency range of operation. The N=5 array exhibited the lowest rise in sidelobes (only 1.1 dB) from extending the bandwidth of the array. The N=7 array exhibited the highest rise in sidelobes (3.0 dB).
TABLE XXXI
OPTIMAL SLL AND POSITIONS FOR PATCH ELEMENTS (UNITS OF λc, FBW=0.2)

       (x1, y1)       (x2, y2)       (x3, y3)       (x4, y4)       SLL (dB)
N=4    (0.53, 0.00)   (0.01, 0.87)                                 -10.8
N=5    (0.0, 0.0)     (0.99, 0.07)   (0.10, 0.84)                  -12.0
N=6    (0.56, 0.30)   (0.50, 0.49)   (0.11, 0.90)                  -14.7
N=7    (0.0, 0.0)     (0.45, 0.76)   (0.49, 0.75)   (0.91, 0.01)   -18.5
TABLE XXXII
OPTIMAL WEIGHTS FOR PATCH ELEMENTS (FBW=0.2)

       w1      w2      w3      w4
N=4    0.320   0.180
N=5    0.268   0.169   0.197
N=6    0.215   0.185   0.100
N=7    0.188   0.142   0.136   0.128
Finally, the total radiation patterns for the optimal N=7 array are again plotted at the lower, center, and upper frequencies for fixed azimuth angles. The radiation pattern at the lower frequency (f = f_L) is plotted in Figure 50, the center-frequency (f = f_c) radiation pattern is given in Figure 51, and the upper-frequency (f = f_U) radiation pattern is plotted in Figure 52. As seen in the ultrawideband case, the beamwidth again varies depending on the frequency. However, the variation is less pronounced in this case because of the lower fractional bandwidth considered.

Figure 50. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L.

Figure 51. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c.

Figure 52. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U.
7.10. Conclusions
The sidelobe-minimization technique was extended from 1-D to 2-D in this chapter. It was seen that the optimal geometry varies depending on the beamwidth and the antenna elements used in the array.

The arrays were then studied over a wide frequency range. Optimal weights were derived that minimize the sidelobe level over a range of frequencies. The optimal geometries and weights were found to vary with the fractional bandwidth used by the array. Hence, if an array is used over a frequency range, the narrowband optimization technique will not be optimal.

The PSO method was again effective in optimizing the array geometry. This optimization technique has proven to work well for a wide variety of problems. In addition, the method lends itself well to parallel processing, which can significantly reduce computation time.
VIII. SUMMARY, CONCLUSIONS, AND FUTURE WORK
8.1. Summary and Conclusions
The primary goal of this dissertation has been to show the effect of an array's geometry on metrics of interest. These metrics include sidelobe level, interference rejection, and SINR. Because of the difficulty of analyzing an array's geometry, in practice the geometry is often chosen to be a standard one. However, this work has shown that gains in performance can be achieved via suitable optimization of the array geometry.

The secondary goal has been to improve upon existing weight-selection strategies via convex optimization. The relatively new field of convex optimization will likely be a tremendously effective tool as it makes its way into the antenna field.
In Chapter 5, improving the performance of an adaptive array was considered. Because of the wide range of environments in which these arrays operate, the concept of an interference environment was introduced. In this manner, the performance can be optimized on average over the likely scenarios in which the array is to perform. This was a necessary development, as optimizing for a specific situation would not have been especially useful for an adaptive array. The interference-suppression capability of adaptive arrays was shown to vary with geometry, and consequently, the optimization of the geometry was of interest. It was shown that the interference allowed through the spatial filter (that is, the antenna array) can be lowered by varying the geometry. In addition, this lowering of interference power was then shown to often translate into gains in overall SINR, a critical parameter in wireless communication.
Sidelobe level was shown to be critical in WCDMA systems in [87]. In Chapter 6, the process of determining the minimum possible SLL in a linear array was developed. Sidelobe-minimizing weights were derived that can be efficiently computed for any linear geometry, beamwidth, scan angle, and antenna type. The Dolph-Chebyshev sidelobe-minimizing weighting method was derived in 1946 [21] and has been used extensively since its publication. The derivation of the weights presented in Chapter 6 was a significant expansion of the capabilities of that method. The total radiation pattern depends on the weights, positions, and elements in the array. Since the optimal weights can be found for any array geometry and any antenna type, the only variables remaining were the array positions. The PSO algorithm was employed to determine optimal positions that, along with the optimal weights, determined the optimal sidelobe levels for linear arrays of size N=2-7. Results were presented for linear arrays steered to broadside and 45°, and for two different beamwidths. In addition, arrays of omnidirectional elements and short dipoles were examined to show the effect of the antenna's radiation pattern on the optimal geometry, weights, and sidelobe level.
In Chapter 7, the optimal sidelobe level for 2-D or planar arrays was considered. The methods for minimizing sidelobes in linear arrays were extended to the planar case. Optimal symmetric planar arrays of size N=4-7 were found for two beamwidths, and for arrays with either omnidirectional or patch antenna elements. In addition, a method of minimizing sidelobes in wideband planar arrays was developed. Sidelobe-minimizing weights were derived that suppress the sidelobes over a range of frequencies, instead of at a single frequency as in the narrowband case. This weighting method is valid for arbitrary bandwidths, beamwidths, antenna types, and planar array geometries. The positions were optimized simultaneously with the weights to determine optimal sidelobe levels for wideband arrays. Results were presented for an ultrawideband case of omnidirectional elements, and for a wideband case of patch antenna elements.
Throughout this work, the hexagonal array has been a recurring optimal two-dimensional array. For interference suppression, the optimal 7-element array was a closely spaced hexagonal array. For sidelobe suppression in the planar case, the hexagonal array arose as the solution for distinct element types and beamwidths. Hence, when using an array with a number of elements that fits well with the hexagonal structure, this geometry is likely to be a good starting point.

As the traffic in wireless communication increases, every variable that can be exploited to improve performance will be optimized. Since the demand for higher data rates and reliability for a given bandwidth continues to grow exponentially, it is likely that array geometry optimization will be employed in real systems.
8.2. Future Work
There is no shortage of applications in which array geometry optimization would prove useful. The obvious next step would be to continue the work of this dissertation for arrays with a larger number of elements. The minimum-sidelobe antenna arrays for one and two dimensions could be studied for an increasing number of elements to determine the characteristics of the optimal arrays as the number of elements becomes large. The same extension could be applied to the interference-suppressing adaptive arrays.
Another topic of interest would be to optimize the weights and geometries of arrays consisting of non-identical elements. Antenna array analysis is almost exclusively performed with identical elements, and it would be interesting to observe whether gains could be made by exploiting elements with different radiation patterns.

Another interesting practical problem would be to minimize cross-polarization in antenna arrays while holding a certain criterion constant (SLL, MSE, etc.). This problem could likely be solved in a manner similar to the solution methods of Chapters 6 and 7.

Implementing precise weights can sometimes be difficult in actual systems. Hence, deriving an optimization problem that returns weights from a discrete set of allowable weights would be advantageous. One could then optimize over the positions to determine an optimal geometry for the discretized weights.
The geometry optimization in this work has focused on translating the elements. For non-omnidirectional elements, the array could also be optimized by allowing the elements to rotate or be placed at an angle relative to the other elements. This would add new degrees of freedom to each element, which could translate to potentially large gains in performance.

On the theoretical side, optimization methods for proving that an array's geometry is globally optimal would be of value. This has not been done because of the mathematical intractability of the problem (many locally optimal points). However, as the field of optimization expands, it is possible that a clever technique could be developed to verify that an array is globally optimal.
Finally, in digital communications, the bit error rate (BER) for a given data rate is
the definitive measure of performance for a wireless communication system. Hence,
more general modeling methods that ultimately minimize the BER would be valuable.
However, because of the large and complex nature of wireless communication systems,
this would not be an easy task.
REFERENCES
[1] L. Coe, Wireless Radio: A History. New York: McFarland & Company, 2006.
[2] L. W. Alvarez, Alvarez: Adventures of a Physicist. New York: Basic Books, 1987.
[3] H. Unz, "Linear arrays with arbitrarily distributed elements," IEEE Trans. Antennas Propag., vol. 8, pp. 222-223, Mar. 1960.
[4] D. King, R. Packard, and R. Thomas, "Unequally spaced, broad-band antenna arrays," IEEE Trans. Antennas Propag., vol. 8, pp. 380-384, Jul. 1960.
[5] R. Harrington, "Sidelobe reduction by nonuniform element spacing," IEEE Trans. Antennas Propag., vol. 9, pp. 187-192, Mar. 1961.
[6] M. Skolnik, G. Nemhauser, and J. Sherman, "Dynamic programming applied to unequally spaced arrays," IEEE Trans. Antennas Propag., vol. 12, pp. 35-43, Jan. 1964.
[7] M. Skolnik, J. Sherman, and F. Ogg, Jr., "Statistically designed density-tapered arrays," IEEE Trans. Antennas Propag., vol. 12, pp. 408-417, July 1964.
[8] W. L. Stutzman, "Shaped-beam synthesis of non-uniformly spaced linear arrays," IEEE Trans. Antennas Propag., vol. AP-20, pp. 499-501, July 1972.
[9] S. U. Pillai, Y. Bar-Ness, and F. Haber, "A new approach to array geometry for improved spatial spectrum estimation," Proc. IEEE, vol. 73, pp. 1522-1524, Oct. 1985.
[10] M. Gavish and A. J. Weiss, "Array geometry for ambiguity resolution in direction finding," IEEE Trans. Antennas Propag., vol. 44, pp. 889-895, June 1996.
[11] C. W. Ang, C. M. See, and A. C. Kot, "Optimization of array geometry for identifiable high resolution parameter estimation in sensor array signal processing," in Proc. Int. Conf. Inf., Commun. Signal Process., Singapore, Sep. 1997, pp. 1613-1617.
[12] A. Goldsmith, Wireless Communications. New York: Cambridge University Press, 2005.
[13] B. P. Kumar and G. R. Branner, "Design of unequally spaced arrays for performance improvement," IEEE Trans. Antennas Propag., vol. 47, pp. 511-523, Mar. 1999.
[14] B. P. Kumar and G. R. Branner, "Generalized analytical technique for the synthesis of unequally spaced arrays with linear, planar, cylindrical or spherical geometry," IEEE Trans. Antennas Propag., vol. 53, pp. 621-633, Feb. 2005.
[15] P. Jarske, T. Saramaki, S. K. Mitra, and Y. Neuvo, "On the properties and design of nonuniformly spaced linear arrays," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, pp. 372-380, Mar. 1988.
[16] T. H. Ismail and M. M. Dawoud, "Null steering in phased arrays by controlling the element positions," IEEE Trans. Antennas Propag., vol. 39, pp. 1561-1566, Nov. 1991.
[17] M. M. Khodier and C. G. Christodoulou, "Sidelobe level and null control using particle swarm optimization," IEEE Trans. Antennas Propag., vol. 53, pp. 2674-2679, Aug. 2005.
[18] N. Petrella et al., "Planar array synthesis with minimum sidelobe level and null control using particle swarm optimization," Int. Conf. Microwaves, Radar, Wireless Comm., pp. 1087-1090, May 2006.
[19] A. Tennant, M. M. Dawoud, and A. P. Anderson, "Array pattern nulling by element position perturbations using a genetic algorithm," Electron. Lett., vol. 30, no. 3, pp. 174-176, Feb. 1994.
[20] O. Quevedo-Teruel and E. Rajo-Iglesias, "Ant colony optimization in thinned array synthesis with minimum sidelobe level," IEEE Antennas Wireless Propag. Lett., vol. 5, pp. 349-352, 2006.
[21] C. L. Dolph, "A current distribution for broadside arrays which optimizes the relationship between beamwidth and sidelobe level," Proc. IRE, vol. 34, pp. 335-348, June 1946.
[22] C. A. Balanis, Antenna Theory: Analysis and Design, 3rd ed. New York: Wiley, 2005.
[23] H. J. Riblet, "Discussion of Dolph's paper," Proc. IRE, vol. 35, pp. 489-492, May 1947.
[24] J. G. Proakis and D. G. Manolakis, Digital Signal Processing, 3rd ed. New Jersey: Prentice Hall, 1996.
[25] R. H. DuHamel, "Optimum patterns for endfire arrays," Proc. IRE, vol. 41, pp. 652-659.
[26] G. Sinclair and F. V. Cairns, "Optimum patterns for arrays of non-isotropic sources," Trans. IRE, PGAP-1, pp. 50-61, Feb. 1952.
[27] S. Holm and B. Elgetun, "Optimization of the beampattern of 2D sparse arrays by weighting," Proc. IEEE Ultrasonics Symp., Cannes, France, 1995.
[28] V. Murino, A. Trucco, and C. S. Regazzoni, "Synthesis of unequally spaced arrays by simulated annealing," IEEE Trans. Signal Processing, vol. 44, pp. 119-123, Jan. 1996.
[29] B. Widrow, P. E. Mantey, and L. J. Griffiths, "Adaptive antenna systems," Proc. IEEE, vol. 55, pp. 2143-2159, Dec. 1967.
[30] P. Bevelacqua and C. A. Balanis, "Optimizing antenna array geometry for interference suppression," IEEE Trans. Antennas Propag., vol. 53, pp. 637-641, Mar. 2007.
[31] A. A. Abouda, H. M. El-Sallabi, and S. G. Haggman, "Effect of antenna array geometry and ULA azimuthal orientation on MIMO channel properties in urban city street grid," Progress in Electromagnetics Research, PIER 64, pp. 257-278, 2006.
[32] X. Li and Z. Nie, "Effect of array orientation on performance of MIMO wireless channels," IEEE Antennas Wireless Propag. Lett., vol. 3, pp. 368-372, 2004.
[33] F. T. Ulaby, Fundamentals of Applied Electromagnetics. New Jersey: Prentice Hall, 2001.
[34] C. A. Balanis, Advanced Engineering Electromagnetics. New York: Wiley, 1989.
[35] W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, 2nd ed. New York: Wiley, 1998.
[36] J. D. Kraus and R. J. Marhefka, Antennas, 2nd ed. New York: McGraw-Hill, 2001.
[37] M. Skolnik, Introduction to Radar Systems. New York: McGraw-Hill, 2001.
[38] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed. New Jersey: Prentice Hall, 2001.
[39] F. Alam, "Space time processing for third generation CDMA systems," Ph.D. dissertation, Virginia Inst. Technol., Blacksburg, Nov. 2002.
[40] L. C. Godara and A. Cantoni, "Uniqueness and linear independence of steering vectors in array space," J. Acoust. Soc. Amer., vol. 70, no. 2, pp. 467-475, Aug. 1981.
[41] K. Tan, K. Ho, and A. Nehorai, "Uniqueness study of measurements obtainable with arrays of electromagnetic vector sensors," IEEE Trans. Signal Process., vol. 44, pp. 1036-1039, Apr. 1996.
[42] S. A. Schelkunoff, "A mathematical theory of linear arrays," Bell System Technical Journal, vol. 22, pp. 80-107, 1943.
[43] A. D. Bresler, "A new algorithm for calculating the current distributions of Dolph-Chebyshev arrays," IEEE Trans. Antennas Propag., vol. AP-28, pp. 951-952, Nov. 1980.
[44] N. Balakrishnan and R. Sethuraman, "Easy generation of Dolph-Chebyshev excitation coefficients," Proc. IEEE, vol. 69, pp. 1508-1509, Nov. 1981.
[45] H. L. Van Trees, Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part IV). New York: Wiley, 2002.
[46] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits," Proc. IRE WESCON Conf. Rec., part 4, 1960, pp. 96-104.
[47] S. Applebaum, "Adaptive arrays," IEEE Trans. Antennas Propag., vol. 24, pp. 585-598, Sep. 1976.
[48] S. Haykin, Adaptive Filter Theory. New Jersey: Prentice Hall, 1996.
[49] O. Macchi, Adaptive Processing: The LMS Approach with Applications in Transmission. New York: Wiley, 1995.
[50] J. C. Maxwell, A Treatise on Electricity and Magnetism, Vols. 1 and 2. Oxford: Clarendon Press, 1873.
[51] A. F. Peterson, S. L. Ray, and R. Mittra, Computational Methods for Electromagnetics. New York: Wiley-IEEE Press, 1997.
[52] K. S. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antennas Propag., vol. 14, no. 3, pp. 302-307, 1966.
[53] R. F. Harrington, Field Computation by Moment Methods. New York: Wiley-IEEE Press, 1993.
[54] A. Hoorfar, "Evolutionary programming in electromagnetic optimization: a review," IEEE Trans. Antennas Propag., vol. 55, pp. 523-537, Mar. 2007.
[55] R. L. Haupt, "Antenna design with a mixed integer genetic algorithm," IEEE Trans. Antennas Propag., vol. 55, pp. 577-582, Mar. 2007.
154
[56] N. Jin and Y. RahmatSamii, “Parallel particle swarm optimization and finite
difference timedomain (PSO/FDTD) algorithm for multiband and wideband patch
antenna designs,” IEEE Trans. Antennas Propag., vol. 53, pp. 34593468, Nov. 2005.
[57] J. M. Johnson and Y. RahmatSamii, “Genetic algorithms and method of moments
(GA/MOM) for the design of integrated antennas,” IEEE Trans. Antennas Propag., vol.
47, pp. 16061614, Oct. 1999.
[58] A. Schriver, Theory of Linear and Integer Programming. New York: Wiley, 1998.
[59] G. B. Dantzig and M. N. Thapa, Linear Programming 1: Introduction. New York:
Springer, 1997.
[60] S. G. Nash and A. Sofer, Linear and Nonlinear Programming. New York:
McGraw-Hill, 1996.
[61] N. K. Karmarkar, “A new polynomial-time algorithm for linear programming,”
Combinatorica, vol. 4, pp. 373-395, 1984.
[62] H. Park and F. Park, “Convex optimization algorithms for active balancing of
humanoid robots,” IEEE Trans. Robotics, vol. 23, pp. 817-822, Aug. 2007.
[63] Z. Q. Luo and W. Yu, “An introduction to convex optimization for communications
and signal processing,” IEEE J. Sel. Areas Commun., vol. 24, pp. 1426-1438, Aug. 2006.
[64] K. S. Ni and T. Q. Nguyen, “Image super-resolution using support vector
regression,” IEEE Trans. Image Process., vol. 16, pp. 1596-1610, June 2007.
[65] T. Zhang, “Sequential greedy approximation for certain convex optimization
problems,” IEEE Trans. Inf. Theory, vol. 49, pp. 682-691, Mar. 2003.
[66] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U. K.:
Cambridge University Press, 2004.
[67] Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex
Programming. Philadelphia, PA: SIAM, 1994.
[68] P. J. M. van Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and
Applications. Dordrecht, Holland: D. Reidel, 1987.
[69] S. Kirkpatrick, C. D. Gelatt, Jr. and M. P. Vecchi, “Optimization by Simulated
Annealing,” Science, vol. 220, pp. 671-680, May 1983.
[70] B. Hajek, “Cooling schedules for optimal annealing,” Math. Oper. Res., vol. 13,
no. 2, pp. 311-329, 1988.
[71] J. Robinson, S. Sinton, and Y. Rahmat-Samii, “Particle swarm, genetic algorithm,
and their hybrids: optimization of a profiled corrugated horn antenna,” IEEE
International Symposium on Antennas & Propagation. San Antonio, Texas. June, 2002.
[72] T. B. Chen, Y. B. Chen, Y. C. Jiao and F. S. Zhang, “Synthesis of antenna array
using particle swarm optimization,” Asia-Pacific Microwave Conference Proceedings,
vol. 3, 2005.
[73] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proc. IEEE Conf.
Neural Networks IV, Piscataway, NJ, 1995.
[74] R. C. Eberhart and Y. Shi, “Particle swarm optimization: developments,
applications and resources,” in Proc. 2001 Congr. Evolutionary Computation, vol. 1,
2001.
[75] S. C. Clark, A. W. Morrison, and M. D. Guyse, “Practice safer GPS navigation – get
protection,” Raytheon Syst. Limited Internal Rep., 2005.
[76] R. M. Gray and L. M. Robinson, Statistical Signal Processing. New York:
Cambridge University Press, 2004.
[77] G. E. Shilov, Linear Algebra. New York: Dover, 1977.
[78] F. Bowman, Introduction to Bessel Functions. New York: Dover, 1958.
[79] C. P. Mathews and M. D. Zoltowski, “Eigenstructure techniques for 2-D angle
estimation with uniform circular arrays,” IEEE Trans. Signal Process., vol. 42, pp.
2395-2407, 1994.
[80] D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing.
London, U. K.: Prentice-Hall, 1984.
[81] G. M. Lee, N. N. Tam and N. D. Yen, Quadratic Programming and Affine
Variational Inequalities: A Qualitative Study. Springer, 2005.
[82] J. Robinson and Y. Rahmat-Samii, “Particle swarm optimization in
electromagnetics,” IEEE Trans. Antennas Propag., vol. 52, no. 2, pp. 397-407, Feb.
2004.
[83] M. G. Andreasen, “Linear arrays with variable interelement spacing,” IEEE Trans.
Antennas Propag., vol. AP-10, pp. 137-143, Mar. 1962.
[84] I. J. Gupta and A. K. Ksienski, “Effect of mutual coupling on the performance of
adaptive arrays,” IEEE Trans. Antennas Propag., vol. AP-31, pp. 785-791, May 1983.
[85] Z. Huang and C. A. Balanis, “Mutual coupling compensation in UCAs: simulations
and experiment,” IEEE Trans. Antennas Propag., vol. 54, pp. 3082-3086, Nov. 2006.
[86] R. Tanner and J. Woodard, WCDMA: Requirements and Practical Design. New
York: Wiley, 2004.
[87] R. Khanna and R. Saxena, “Performance improvement in array processing
architectures of WCDMA systems by low side lobe beamforming,” IEEE Int. Conf.
Personal Wireless Comm., pp. 324-328, Jan. 2005.
[88] M. Ghavami, “Wideband smart antenna theory using rectangular array
structures,” IEEE Trans. Signal Process., vol. 50, pp. 2143-2151, Sep. 2002.
[89] K. Nishikawa et al., “Wideband beamforming using fan filter,” in Proc. ISCAS,
1992, pp. 533-536.
[90] M. Ghavami and R. Kohno, “Recursive fan filters for broadband partially adaptive
antenna,” IEEE Trans. Commun., vol. 48, pp. 185-188, Feb. 2000.
[91] K. R. Carver and J. W. Mink, “Microstrip antenna technology,” IEEE Trans.
Antennas Propag., vol. AP-29, no. 1, pp. 2-24, Jan. 1981.
[92] H. Lebret and S. Boyd, “Antenna array pattern synthesis via convex optimization,”
IEEE Trans. Signal Process., vol. 45, pp. 526-532, Mar. 1997.
[93] D. B. Ward, Z. Ding and R. Kennedy, “Broadband DOA estimation using frequency
invariant beamforming,” IEEE Trans. Signal Process., vol. 46, pp. 1463-1469, May
1998.
[94] K. Wong and W. Hsu, “A broadband rectangular patch antenna with a pair of wide
slits,” IEEE Trans. Antennas Propag., vol. 49, pp. 1345-1347, Sep. 2001.
[95] K. Tong, K. Luk, K. Lee and R. Q. Lee, “A broadband U-slot rectangular patch
antenna on a microwave substrate,” IEEE Trans. Antennas Propag., vol. 48, pp. 954-960,
June 2000.
ABSTRACT

The radiation pattern of an antenna array depends strongly on the weighting method and the geometry of the array. Selection of the weights has received extensive attention, primarily because the radiation pattern is a linear function of the weights. However, the array geometry has received relatively little attention even though it also strongly influences the radiation pattern, primarily because of the complex way in which the geometry affects the radiation pattern. The main goal of this dissertation is to determine methods of optimizing array geometries in antenna arrays.

An adaptive array with the goal of suppressing interference is investigated. It is shown that the interference rejection capabilities of the antenna array depend upon its geometry. The concept of an interference environment is introduced, which enables optimization of an adaptive array based on the expected directions and power of the interference. This enables the optimized array to perform better on average, instead of only in specific situations. An optimization problem is derived whose solution yields an optimal array for suppressing interference. Optimal planar arrays are presented for varying numbers of elements. It is shown that, on average, the optimal arrays increase the signal-to-interference-plus-noise ratio (SINR) when compared to standard arrays.

Sidelobe level is an important metric used in antenna arrays, and depends on the weights and positions in the array. A method of determining optimal sidelobe-minimizing weights is derived that holds for any linear array geometry, beamwidth, antenna type and scan angle. The positions are then optimized simultaneously with the optimal weights to determine the minimum possible sidelobe level in linear arrays.
Minimizing sidelobes is then considered for two-dimensional arrays. A method of determining optimal weights in symmetric two-dimensional arrays is derived for narrowband and wideband cases. The positions are again simultaneously optimized with the weights to determine optimal arrays, weights and sidelobe levels. Results are presented for arrays of varying size, with different antenna elements, and for distinct beamwidths, bandwidths and scan angles.
ACKNOWLEDGEMENTS

This work would not have been possible without my adviser, Dr. Constantine Balanis. His guidance and helpfulness were paramount in producing successful research. Dr. Balanis let me into his research group and gave me funding to research array geometry, which ultimately led to the work presented here. I would like to thank Dr. Joseph Palais, Dr. Abbaspour-Tamijani, Dr. James Aberle and Dr. Cihan Tepedelenlioglu for taking the time to be on my research committee and for helpful suggestions along the way. I would like to thank Dr. Andreas Spanias for helping with my qualifying exam and in understanding Fourier Transforms. Thanks also to Dr. Shira Broschat and Dr. John Schneider from Washington State, Dr. Lee Boyce from Stanford, and Dr. Gang Qian, specifically during my qualifying and comprehensive examinations. My thanks go to my colleagues at ASU, including Zhiyong Huang, Bo Yang, Victor Kononov, and Aron Cummings. The presence of these people increased the quality of my research and life in various ways during my time at ASU. This work is the culmination of approximately 10 years of college education. I am indebted to many people for academic, personal, and financial assistance along the way, but they are too numerous to list here. Of these, I would especially like to thank my parents; without them this work would not have been completed, due to my youthful impatience and wavering trajectory.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER
I. INTRODUCTION
   1.1 Overview
   1.2 Literature Survey
II. FUNDAMENTAL CONCEPTS OF ANTENNA ARRAYS
   2.1 Introduction
   2.2 Antenna Characteristics
   2.3 Wireless Communication
   2.4 Antenna Arrays
   2.5 Spatial Processing Using Antenna Arrays
   2.6 Aliasing
III. WEIGHTING METHODS IN ANTENNA ARRAYS
   3.1 Introduction
   3.2 Phase-Tapered Weights
   3.3 Schelkunoff Polynomial Method
   3.4 Dolph-Chebyshev Method
   3.5 Minimum Mean-Square Error (MMSE) Weighting
   3.6 The LMS Algorithm
IV. METHODS OF ANTENNA ARRAY GEOMETRY OPTIMIZATION
   4.1 Introduction
   4.2 Linear Programming
   4.3 Convex Optimization
   4.4 Simulated Annealing
   4.5 Particle Swarm Optimization (PSO)
V. ARRAY GEOMETRY OPTIMIZATION FOR INTERFERENCE SUPPRESSION
   5.1 Introduction
   5.2 Interference Environment
   5.3 Optimization for Interference Suppression
   5.4 Planar Array with Uniform Interference at Constant Elevation
   5.5 Using Simulated Annealing to Find an Optimal Array
   5.6 Evaluating the Performance of Optimal Arrays
   5.7 Summary
VI. MINIMUM SIDELOBE LEVELS FOR LINEAR ARRAYS
   6.1 Introduction
   6.2 Problem Setup
   6.3 Determination of Optimum Weights for an Arbitrary Linear Array
   6.4 Broadside Linear Array
   6.5 Array Scanned to 45 Degrees
   6.6 Array of Dipoles Scanned to Broadside
   6.7 Mutual Coupling
   6.8 Conclusions
VII. MINIMIZING SIDELOBES IN PLANAR ARRAYS
   7.1 Introduction
   7.2 Two-Dimensional Symmetric Arrays
   7.3 Sidelobe-Minimizing Weights for Two-Dimensional Arrays
   7.4 Sidelobe-Minimizing Weights for Scanned Two-Dimensional Arrays
   7.5 Symmetric Arrays of Omnidirectional Elements
   7.6 Symmetric Arrays of Patch Antennas
   7.7 Wideband Weighting Method
   7.8 Optimal Wideband Arrays of Omnidirectional Elements
   7.9 Optimal Wideband Arrays of Patch Antennas
   7.10 Conclusions
VIII. SUMMARY, CONCLUSIONS, AND FUTURE WORK
   8.1 Summary and Conclusions
   8.2 Future Work
REFERENCES

LIST OF TABLES

Table
I. OUTPUT POWER COMPARISON AMONG DIFFERENT ARRAYS
II. RELATIVE SIR FOR CASE 1
III. RELATIVE SIR FOR CASE 2
IV. RELATIVE SIR FOR CASE 3
V. NUMBER OF PARTICLES REQUIRED FOR CONVERGENCE FOR VARYING ARRAY SIZE WITH SIMULATION TIME
VI. OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θd = 90°)
VII. OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θd = 90°)
VIII. OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 2 (BW = 30°, θd = 90°)
IX. OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θd = 90°)
X. OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θd = 45°)
XI. OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θd = 45°)
XII. OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 2 (BW = 30°, θd = 45°)
XIII. OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θd = 45°)
XIV. OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)
XV. OPTIMUM WEIGHTS FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)
XVI. OPTIMUM ELEMENT POSITIONS (IN λ) AND SLL FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)
XVII. OPTIMUM WEIGHTS FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)
XVIII. NUMBER OF REQUIRED PARTICLES FOR PSO AND COMPUTATION TIME FOR N = 4-7
XIX. OPTIMAL SLL AND POSITIONS FOR CASE 1 (DIMENSIONS IN λ)
XX. OPTIMAL WEIGHTS FOR CASE 1
XXI. OPTIMAL WEIGHTS WITH ASSOCIATED POSITIONS
XXII. OPTIMAL SLL AND POSITIONS FOR CASE 2 (DIMENSIONS IN λ)
XXIII. OPTIMAL WEIGHTS FOR CASE 2
XXIV. OPTIMAL WEIGHTS FOR 7-ELEMENT HEXAGONAL ARRAY
XXV. OPTIMAL SLL AND POSITIONS FOR CASE 1 OF PATCH ELEMENTS (UNITS OF λ)
XXVI. OPTIMAL WEIGHTS FOR CASE 1 WITH PATCH ELEMENTS
XXVII. OPTIMAL SLL AND POSITIONS FOR CASE 2 OF PATCH ELEMENTS (UNITS OF λ)
XXVIII. OPTIMAL WEIGHTS FOR CASE 2 WITH PATCH ELEMENTS
XXIX. OPTIMAL SLL AND POSITIONS FOR OMNIDIRECTIONAL ELEMENTS (UNITS OF λc, FBW = 0.5)
XXX. OPTIMAL WEIGHTS FOR OMNIDIRECTIONAL ELEMENTS (FBW = 0.5)
XXXI. OPTIMAL SLL AND POSITIONS FOR PATCH ELEMENTS (UNITS OF λc, FBW = 0.2)
XXXII. OPTIMAL WEIGHTS FOR PATCH ELEMENTS (FBW = 0.2)

LIST OF FIGURES

Figure
1. Elevation (a) and azimuthal (b) patterns for a short dipole
2. Arbitrary antenna array geometry
3. Basic setup of a linear N-element array
4. Spatial processing of antenna array signals
5. Magnitude of array factor for N=5 elements
6. Magnitude of the array factor (dB) for 2-D array
7. Array factor of steered linear array
8. Array pattern with weights from Schelkunoff method
9. Dolph-Chebyshev array for N=6 with sidelobes at -30 dB
10. Array factor magnitudes for MMSE weights
11. MSE at each iteration, along with the optimal MSE
12. Examples of convex sets
13. Examples of nonconvex sets
14. Illustration of a convex function
15. Array factor for optimal weights found via linear programming
16. Optimum N=4 element array (measured in units of λ)
17. Optimum N=5 element array (measured in units of λ)
18. Optimum N=6 element array (measured in units of λ)
19. Optimum N=7 element array (measured in units of λ)
20. Magnitude of array factor for optimal arrays (N=6)
21. Symmetric linear array
22. Array factors for optimal weighted and phase-tapered array (φ = 0°)
23. Array factors for optimal weighted and phase-tapered array (φ = 45°)
24. Magnitude of array factor for optimal arrays (N=6)
25. Magnitude of array factor for optimal arrays (N=7)
26. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=6)
27. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=7)
28. Arbitrary planar array
29. Magnitude of patch pattern (in dB)
30. Suppression region for two-dimensional arrays
31. Suppression region for an array scanned away from broadside
32. Elevation plot of array factors with optimal and phase-tapered weights
33. Azimuth plot of array factors with optimal and phase-tapered weights
34. AF for phase-tapered weights, N=7: (a) elevation plot, (b) azimuth plot
35. Optimal symmetric array locations for Case 1 (dimensions in λ)
36. Magnitude of T(θ) at distinct azimuthal angles (Case 1)
37. Optimal symmetric array locations for Case 2 (dimensions in λ)
38. Magnitude of T(θ) at distinct azimuthal angles (Case 2)
39. Optimal symmetric patch array locations for Case 1 (units of λ)
40. Magnitude of T(θ) at distinct azimuthal angles (Case 1)
41. Optimal symmetric patch array locations for Case 2 (units of λ)
42. Magnitude of T(θ) at distinct azimuthal angles (Case 2)
43. Suppression region for two-dimensional arrays over a frequency band
44. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = fc
45. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = fL
46. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = fU
47. Optimal symmetric array locations for FBW = 0.5 (units of λc)
48. Magnitude of T(θ) at distinct azimuth angles, N=7 (patch), for f = fc
49. Magnitude of T(θ) at distinct azimuth angles, N=7 (patch), for f = fL
50. Magnitude of T(θ) at distinct azimuth angles, N=7 (patch), for f = fU
51. Optimal symmetric array locations for FBW = 0.2 (units of λc)
52. Optimal symmetric patch array locations for FBW = 0.2 (units of λc)
I. INTRODUCTION

1.1 Overview

On December 12, 1901, Guglielmo Marconi successfully received the first transatlantic radio message [1]. The message was the Morse code for the letter ‘S’ – three short clicks. This event was arguably one of the most significant achievements in early radio communication. This communication system, while technically functional, clearly had significant room for improvement. A century of improvement in the field of wireless communication has occurred since then. Developments in the fields of electronics, information theory, signal processing, and antenna theory have all contributed to the ubiquity of wireless communication systems today. However, despite the tremendous advances since the days of Marconi in each of these fields, the desire for improved wireless communication systems has not been quenched. The envelope has been pushed in every imaginable direction, with no letup in progress likely in the foreseeable future.

The concept of an antenna array was first introduced in military applications in the 1940s [2]. This development was significant in wireless communications as it improved the reception and transmission patterns of the antennas used in these systems. The array also enabled the antenna system to be electronically steered – to receive or transmit information primarily from a particular direction without mechanically moving the structure. As the field of signal processing developed, arrays could be used to receive energy (or information) from a particular direction while rejecting information or nulling out the energy in unwanted directions. Consequently, arrays could be used to mitigate intentional interference (jamming) or unintentional interference (radiation from other sources not meant for the system in question) directed toward the communication system. Further development in signal processing led to the concept of adaptive antenna arrays, which adapt their radiation or reception pattern based on the environment they are operating in. This again significantly contributed to the capacity available in wireless communication systems.

While there has been a large amount of work on the signal processing aspects (and, in conjunction, the electronics used to implement the algorithms), the physical geometry – the locations of the antenna elements in the array – has received relatively little attention. The reason for this lies in the mathematical complexity of optimizing the element positions for various situations. As shown in Chapter 2, understanding the influence of the element weighting (which is a major component of the signal processing involved in antenna arrays) is significantly simpler than understanding the effect of varying the positions of the elements. Thanks to the tremendous advances in numerical computing, optimization of the element positions in an antenna array (for various situations) is now tractable. The primary goal of this dissertation is to study the influence of array geometry on wireless system performance. It will be shown that performance gains can be obtained via intelligent selection of the array geometry. Array geometry optimization can therefore be expected to contribute to the continuing advancement of wireless communication system performance.
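The electronic steering mentioned above is easy to see numerically. The sketch below is an illustrative example only (the element count, spacing, and scan angle are arbitrary choices, not values from this dissertation): it computes the array factor of a uniform linear array and steers the main beam purely by choosing the phases of the weights.

```python
import numpy as np

def array_factor(positions, weights, theta):
    """Array factor of a linear array along one axis.

    positions: element positions in wavelengths
    weights:   complex element weights
    theta:     observation angles in radians, measured from the array axis
    """
    # AF(theta) = sum_n w_n * exp(j * 2*pi * d_n * cos(theta))
    phase = 2 * np.pi * np.outer(np.cos(theta), positions)
    return np.exp(1j * phase) @ weights

# 8-element, half-wavelength-spaced array steered to 60 degrees purely
# by the weight phases -- no mechanical rotation of the structure.
N, d = 8, 0.5
pos = d * np.arange(N)
theta0 = np.deg2rad(60.0)
w = np.exp(-2j * np.pi * pos * np.cos(theta0)) / N  # phase-tapered weights

theta = np.linspace(0.0, np.pi, 721)
af = np.abs(array_factor(pos, w, theta))
print(f"main beam at {np.rad2deg(theta[np.argmax(af)]):.1f} degrees")
```

Changing only the phase taper moves the main beam; changing the positions, by contrast, alters the pattern in a nonlinear and far less transparent way, which is the difficulty this dissertation addresses.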
This dissertation is organized as follows. Chapter 2 introduces the main ideas and terminology used in understanding antenna arrays. Chapter 3 discusses various optimization methods used in this work. Chapter 4 discusses methods of choosing the weighting vector applied in the antenna array. Chapters 2-4 are primarily a collection of others’ work; Chapters 5-7 represent the author’s original research for this dissertation.

Chapter 5 deals with a specific problem in a wireless communication system, namely interference suppression in an adaptive array. An optimization problem is derived whose solution yields an optimal array for a given interference environment, as defined in that chapter. Solutions of this optimization problem (that is, array geometries) are presented for a specific situation and the gains in performance are illustrated.

Chapter 6 deals with the minimum possible sidelobe level for a linear antenna array with a fixed number of elements. A method of determining the optimal sidelobe-minimizing weight vector is derived that holds for an arbitrary antenna type, scan angle, and beamwidth. This method of weight selection, coupled with a geometrical optimization routine, yields a lower bound on sidelobe levels in linear antenna arrays. The methods are employed on arrays of varying size and beamwidths, and with different types of antenna elements. The minimum sidelobe levels of arrays with an optimized geometry are compared to those with a standard (or nonoptimized) geometry.

Chapter 7 deals with the determination of minimum sidelobe levels in planar or two-dimensional arrays. The method of weight selection is extended from the linear to the planar case, along with the geometrical optimization routine. Two-dimensional arrays of varying sizes and beamwidths, made up of different antenna types, are optimized. The narrowband assumption is then discarded and optimal weights are derived for the wideband situation. Optimal geometries are then presented for the wideband case for arrays made up of both omnidirectional and patch antenna elements.

Finally, Chapter 8 summarizes the important results, presents conclusions based on the solutions, and discusses future problems of interest. The remainder of this chapter presents a literature survey of previous research on array geometry optimization.

1.2 Literature Survey

The first articles on improving array performance via geometry optimization date back to the early 1960s. Unz [3] studied linear arrays in 1960 and noted that performance improvement could be obtained by holding the weights constant and varying the element positions. In 1960, King [4] proposed eliminating grating lobes via element placement in an array. In 1961, Harrington [5] considered small element perturbations in an attempt to synthesize a desired array pattern.

The concept of ‘thinned arrays’ was introduced in the early 1960s as well. It was noted that in large, periodically spaced antenna arrays, removing some of the elements did not noticeably degrade the array’s performance. This method of altering an array’s geometry was introduced by Skolnik et al. [6] and was first studied deterministically – attempting to systematically determine the minimum number of elements required to achieve a desired performance metric. For large arrays, the problem was tackled in a statistical fashion to avoid the excessive amount of computation time required to determine an optimal thinned array [7].
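The thinning idea can be illustrated with a minimal numerical sketch. The parameters below (array size, which elements are removed, the main-beam exclusion region) are arbitrary choices for the demonstration, not values from [6] or [7]: a few elements of a uniformly weighted, half-wavelength-spaced array are removed and the peak sidelobe level is compared before and after.

```python
import numpy as np

def peak_sidelobe_db(pos):
    """Peak sidelobe (dB relative to the main beam) for uniform weights."""
    u = np.linspace(-1.0, 1.0, 8001)                 # u = cos(theta)
    af = np.abs(np.exp(2j * np.pi * np.outer(u, pos)).sum(axis=1)) / len(pos)
    # Exclude the broadside main beam; the first nulls of the full
    # 20-element array fall near u = +/-0.1.
    sidelobes = af[np.abs(u) > 0.12]
    return 20 * np.log10(sidelobes.max())

full = 0.5 * np.arange(20)                           # 20-element lambda/2 array
thinned = np.delete(full, [3, 8, 14])                # drop three interior elements

print(round(peak_sidelobe_db(full), 1))              # about -13 dB for the full array
print(round(peak_sidelobe_db(thinned), 1))           # peak sidelobe after thinning
```

The thinned array keeps essentially the same aperture (and hence main-beam width) with fewer elements, at the cost of some change in the sidelobe structure, which is the trade-off the thinned-array literature studies systematically.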
Stutzman [8] introduced a simple method of designing nonuniformly spaced linear arrays that is based on Gaussian quadrature and involves fairly simple calculations. This method requires a specified array pattern and set of weights; it then attempts to determine an array geometry that closely approximates the desired array pattern. He showed that by appropriate scaling of the element spacings, some of the elements will lie in the region where the ideal source has a small excitation, and thus can be omitted from the array (another method of array thinning). The method does not guarantee a global optimum for the element positions. Pillai et al. [9] show that for linear aperiodic arrays, there exists an array that has superior spatial-spectrum estimation ability.

Array geometry plays a critical role in the direction-finding capabilities of antenna arrays. Gavish and Weiss [10] compared array geometries based on how distinct the steering vectors are for distinct signal directions; they proposed that larger distinctions lead to less ambiguity in direction finding. Ang et al. [11] also evaluated the direction-finding performance of arrays by varying the elements' positions based on a genetic algorithm.

Antenna arrays are also used for diversity reception, or comparing signal power at spatially distinct locations and processing the signals based on their relative strength. A textbook proof analyzing uniformly distributed multipath components suggests arrays will exhibit good diversity characteristics if the antennas are separated by at least 0.4 λ [12].

An analytical method of choosing a linear array geometry for a given set of weights is presented in [13]. In addition, this method was also extended to circular and spherical arrays [14]. In [15] the weights are optimized and the linear array is then scaled to find an optimal geometry. A method of perturbing element positions to place nulls in desired directions is described in [16].

Due to the large increase in the computational capability of computers, array geometry optimization has been under investigation recently using biologically inspired algorithms, such as Genetic Algorithms (GA). Khodier and Christodoulou [17] used the Particle Swarm Optimization (PSO) method to determine optimal sidelobe-minimizing positions for linear arrays assuming the weights were constant. In [18], PSO methods were used for planar array synthesis in minimizing sidelobes, along with null placement. Tennant et al. [19] used a genetic algorithm to reduce sidelobes via element position perturbations. In [20], the authors demonstrate sidelobe minimization by choosing a geometry based on the Ant Colony Optimization (ACO) method.

In addition to geometry considerations, the minimum possible sidelobe level for an array is of interest. For linear, equally spaced arrays, the problem of determining the optimal weights was solved by Dolph and published in 1946 [21]. This method is known as the Dolph-Chebyshev method, because Dolph uses Chebyshev polynomials to obtain the excitation coefficients. The method returns the minimum possible null-to-null beamwidth for a specified sidelobe level (or equivalently, the minimum possible sidelobe level for a specified null-to-null beamwidth). This method has an implicit maximum array spacing for a given beamwidth [22]. However, Riblet [23] showed that for arrays with interelement spacing less than λ/2, there exists a set of weights that give a smaller null-to-null main beam than Dolph's method. Riblet only derives the results for arrays with an odd number of elements. The Dolph-Chebyshev method produces
sidelobes that have equal amplitudes. A more generalized version of Dolph's algorithm (called an equiripple filter) is also frequently used in the design of Finite Impulse Response (FIR) filters in the field of signal processing [24]. In 1953, DuHamel extended the work of Dolph to endfire linear arrays with an odd number of elements [25]. Dolph's work was also considered for the case of nonisotropic sensors; however, the problem was not solved for the general case [26]. The optimum sidelobe-minimizing weights for broadside, nonuniformly spaced symmetric linear arrays with real weights can now be found using linear programming [27]; this work is the subject of Chapter 5. The general case of nonuniform arrays with arbitrary scan angle, beamwidth and antenna pattern is derived in Chapter 6. In [28], the authors attempt to simultaneously optimize the weights and the positions of a 25-element linear array using a Simulated Annealing (SA) algorithm. They make no claim that their results are optimal, but do show the sidelobes lowered via the optimization method.

Adaptive antenna arrays began with the work of Bernard Widrow in the 1960s [29]. Optimizing an adaptive antenna array's geometry with regard to suppressing interference was performed in [30]. Because of the difficulty in examining array geometry and determining an optimal array, the impact of geometry on performance was studied by considering standard arrays such as the uniform linear array. The effect of array geometry on wireless systems in urban environments using Multiple-Input Multiple-Output (MIMO) channels has been studied [31]. The array geometry is shown to have a significant impact on the MIMO channel properties, including the channel capacity. The effect of array orientation on MIMO wireless channels was investigated in [32].
II. FUNDAMENTAL CONCEPTS OF ANTENNA ARRAYS

2.1. Introduction

An antenna array is a set of N spatially separated antennas. Put simply, an array of antennas does a superior job of receiving signals when compared with a single antenna. In general, array performance improves with added elements; therefore arrays in practice usually have more elements. Arrays in practice can have as few as N=2 elements, which is common for the receiving arrays on cell phone towers. Arrays can have several thousand elements, as in the AN/FPS-85 Phased Array Radar Facility operated by the U.S. Air Force [33]. The array has the ability to filter the electromagnetic environment it is operating in based on the spatial variation of the signals present, leading to their widespread use in wireless applications. There may be one signal of interest or several, along with noise and interfering signals. The methods by which an antenna array can process signals in this manner are discussed following an elementary discussion of antennas.

2.2. Antenna Characteristics

Throughout this dissertation, a Cartesian coordinate system with axis labels x, y, and z will be used along with spherical coordinates θ (polar angle ranging from 0 to π, measured off the z-axis) and φ (azimuth angle ranging from 0 to 2π, measured off the x-axis). The coordinates are illustrated in Figure 1. A physical antenna has a radiation pattern that varies with direction. The radiation pattern is also a function of frequency. By reciprocity, the radiation pattern is the same as the antenna's reception pattern [34], so the two can be discussed interchangeably.
In practice, antennas communicate in the far-field region, and this is assumed throughout. The radiation pattern takes different shapes depending on how far the observation is from the antenna; these regions, in order of increasing distance from the antenna, are commonly called the reactive near-field region, the radiating near-field (Fresnel) region and the far-field (Fraunhofer) region [22]. For an antenna of maximum length D, the far-field region occurs when the following two conditions are met:

R > 2D²/λ   (2.1)

R >> λ.   (2.2)

For a modern cellular phone operating at 1.9 GHz with an antenna length of roughly D=4 cm, both inequalities are achieved for R>2 meters. The radiated far-zone field of an antenna will be described by the function F(R, θ, φ). For example, the far-zone field radiated by a short dipole of length L with uniform current I is given by [33]:

F(R, θ, φ) = [jILη₀ e^{−jkR} / (2R)] sinθ,   (2.3)

where j = √−1, η₀ is the impedance of free space, and k = 2π/λ is the wavenumber. The normalized field pattern, denoted by f(θ, φ), describes the angular variation in the reception pattern of the antenna. For the short dipole, the normalized field pattern is expressed as

f(θ, φ) = sinθ.   (2.4)

The normalized field pattern will be of frequent interest in this work. Unless otherwise noted, it will be assumed that a single frequency is of interest (described by the corresponding wavelength λ).
This field pattern is plotted in Figure 1. The horizontal axis in Figure 1(a) can be the x- or y-axis; due to symmetry the elevation pattern will not change.

Figure 1. (a) Elevation and (b) azimuthal patterns for a short dipole.

Directivity (or maximum directivity) is an important antenna parameter that describes how much more directional an antenna is than a reference source, usually an isotropic radiator. The higher the directivity, the more pointed or directional the antenna pattern will be. Directivity, D, can be calculated from

D = 4π / [∫₀^{2π} ∫₀^{π} [f(θ, φ)]² sinθ dθ dφ].   (2.5)

The directivity of the short dipole discussed previously is 1.5 (1.76 dB). An antenna with a directivity of 1 (or 0 dB) would be an isotropic source; all actual antennas exhibit a directivity higher than this.
Antennas are further described by their polarization. The polarization of the radiated field is the figure traced out by the electric field at a fixed location in space as a function of time. Common polarizations are linear, elliptical and circular. The polarization of the short dipole is linear. The polarization of an antenna is the same as the polarization of its radiated fields. If an antenna is attempting to receive a signal from an electromagnetic wave, it must be matched to the polarization of the incoming wave. If the wave is not matched to the antenna, part or all of the energy will not be detected by the antenna [22]. In this dissertation, unless otherwise noted, it will be assumed that the antennas are properly matched in polarization to the desired waves. The preceding discussion will be sufficient for the purposes of this work. Further information on antennas can be found in several popular textbooks [22, 35-36].

2.3. Wireless Communication

The primary purpose of antenna systems is communication; however, they are also used for detection [37]. The information to be transmitted or received will be represented by m(t). The message m(t) will be assumed to be bandlimited to B Hz, meaning almost all the energy has frequency content below B Hz. In the earlier days of radio, m(t) had the information coded directly into the amplitude or frequency of the signal (as in AM or FM radio). Information today is primarily encoded into digital form, and m(t) is a train of a discrete set of symbols representing 1s and 0s. The information is still encoded into the amplitude and phase of these symbols; however, the amplitudes and phases now take on a discrete set of values. In the most basic form of digital
communication, binary phase shift keying (BPSK), m(t) is either +1 or −1 (representing a 1 or a 0), so that the information is encoded into the phase. Note that m(t) can be complex, where the real part represents the in-phase component of the signal and the imaginary part corresponds to the quadrature component [38]. Digital communication is used because of its high data rate, lower probability of error than in analog communication (along with error-correcting codes), high spectral efficiency and high power efficiency [12]. The message m(t) is then modulated up to the frequency used by the antenna system. The transmitted signal s(t) is given by

s(t) = m(t) e^{j2πf_c t},   (2.6)
where f c is the carrier (or center) frequency used by the antenna system. Note that in general B << f c . Typically, the energy then lies within the frequency spectrum in a very narrow band around f c , so that the transmitted signal is assumed to be a monochromatic plane wave. If the signal is sufficiently broadband that the narrowband assumption cannot be applied, the signal can be processed by filtering it into distinct narrow bands and processing each separately. In the far field the narrowband signal will have the characteristics of a monochromatic plane wave. Assume that the wave is traveling in the direction defined by (θ , φ ) relative to a reference point (for instance, a receiving antenna). The wavevector
k is defined to represent the magnitude of the phase changes along the x, y, and z
directions:
k = (k_x, k_y, k_z) = (2π/λ)(sinθ cosφ, sinθ sinφ, cosθ).   (2.7)
The spatial variation of the signal can then be written as

S(x, y, z, t) = s(t) e^{−j(k_x x + k_y y + k_z z)}.   (2.8)
Defining the position vector as R = (x, y, z), (2.8) can be written more compactly as

S(R, t) = s(t) e^{−jk⋅R}.   (2.9)
Digital signal processors operating on a single antenna can only process signals based on their time variation. Space-time filters process signals based on their spatial and temporal variation [39]. In order to do spatial filtering, an array of sensors is required.

2.4. Antenna Arrays

The basic setup of an arbitrary antenna array is shown in Figure 2. The location of the nth antenna element is described by the vector d_n, where
d_n = [x_n  y_n  z_n].   (2.10)
The set of locations of an N-element antenna array will be described by the N-by-3 matrix D, where
D = [d_1
     d_2
     ⋮
     d_N].   (2.11)
When the array is linear (for example, all elements placed along the z-axis), the matrix D can be reduced to a vector.
Figure 2. Arbitrary antenna array geometry.

Let the output from the nth antenna at a specific time be X_n. The output from antenna n is then weighted (by w_n) and summed together to produce the antenna array output,
Y, as shown in Figure 3. See Chapter 3 for a discussion of weighting methods. The array output can be written as

Y = Σ_{n=1}^{N} w_n X_n.   (2.12)
Defining

X = [X_1, X_2, …, X_N]^T   (2.13)

and

W = [w_1, w_2, …, w_N]^T,   (2.14)

then (2.12) can be rewritten in compact form as

Y = W^T X,   (2.15)

where T represents the transpose operator.

Figure 3. Spatial processing of antenna array signals.
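The weighted-sum output of (2.12)–(2.15) is simply an inner product between the weight vector and a snapshot of the element outputs; a minimal NumPy sketch (the values of X and W below are made-up, purely illustrative):

```python
import numpy as np

# Snapshot of outputs from an N=3 element array (hypothetical values)
X = np.array([1.0 + 0.5j, 0.2 - 0.3j, -0.4 + 0.1j])

# Weights applied to each element (hypothetical values)
W = np.array([0.5, 1.0, 0.5], dtype=complex)

# Array output Y = W^T X, as in (2.15); for 1-D arrays this is a plain inner product
Y = W.T @ X
print(Y)   # prints (0.5+0j)
```
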
2.5. Spatial Processing Using Antenna Arrays

Suppose the transmitted signal given by (2.9) is incident upon an N-element antenna array. Let the normalized field pattern for each antenna be described as a function of the wavevector (k) and be represented by f(k). The array output is then

y(t) = Σ_{n=1}^{N} w_n s(t) e^{−jk⋅d_n} f(k).   (2.16)

If the elements are identical, (2.16) reduces to

y(t) = s(t) f(k) (Σ_{n=1}^{N} w_n e^{−jk⋅d_n}).   (2.17)

The quantity in parentheses is referred to as the array factor (AF). Hence, the output is proportional to the transmitted signal, multiplied by the element factor and the array factor. This factoring is commonly called pattern multiplication, and it is valid for arrays with identical elements oriented in the same direction. A very general form for the output of an array arises when there are G incident signals (with wavevectors k_i, i = 1, 2, …, G) incident on N antennas with distinct patterns (given by f_n(k), n = 1, 2, …, N). Then the output is

y(t) = Σ_{n=1}^{N} Σ_{i=1}^{G} s_i(t) f_n(k_i) w_n e^{−jk_i⋅d_n}.   (2.18)

For one-dimensional arrays with elements along the z-axis (linear array),

d_n = (0, 0, z_n).   (2.19)

Using (2.7), the AF reduces to
AF = Σ_{n=1}^{N} w_n e^{−j(2π/λ) z_n cosθ}.   (2.20)

The one-dimensional array factor is only a function of the polar angle. Hence, the array can filter signals based on their polar angle θ but cannot distinguish arriving signals based on the azimuth angle φ. For two-dimensional arrays with elements on the xy plane, the array factor becomes [22]

AF = Σ_{n=1}^{N} w_n e^{−j(2π/λ)(x_n sinθ cosφ + y_n sinθ sinφ)}.   (2.21)

This array factor is a function of both spherical angles and can therefore filter signals based on their azimuth and elevation angles.

The effect of the array on the received signal as a function of the angle of arrival is now illustrated by examining the array factor. An N-element array will be analyzed. For simplicity, let w_n = 1 for all n, and let d_n = (0, 0, nλ/2). Then (2.20) reduces to

AF = Σ_{n=1}^{N} e^{−jnπ cosθ}.   (2.22)

Using the identity

Σ_{n=0}^{N−1} c^n = (1 − c^N)/(1 − c),   (2.23)

it follows that (2.22) can be written as

AF = e^{−jπ cosθ} (1 − e^{−jNπ cosθ}) / (1 − e^{−jπ cosθ}).   (2.24)
After factoring, the above equation simplifies to

AF = e^{−j(N+1)π cosθ/2} [sin(Nπ cosθ/2) / sin(π cosθ/2)].   (2.25)

The magnitude of the array factor is plotted in Figure 4 for an array with N=5 elements, normalized so that the peak of the array factor is unity or 0 dB. The magnitude of the array factor shows that the array will receive (or transmit) the maximum energy when θ = 90°. Manipulation of the weights will allow the array factor to be tailored to a desired pattern. Selection of the weights is a simpler problem, as the array factor is a linear function of the weights; weighting methods are the subject of Chapter 3. In addition, the response of the array factor is strongly influenced by the specific geometry (D) used. The array factor is a much more complicated function of the element positions; hence, optimizing array geometry is highly nonlinear and exponentially more difficult.
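The factored form (2.25) can be checked against the direct sum (2.22) numerically; a short sketch for the N=5, half-wavelength case of Figure 4 (the grid density is an arbitrary choice):

```python
import numpy as np

N = 5
theta = np.linspace(1e-6, np.pi - 1e-6, 1000)  # polar angle; endpoints nudged
psi = np.pi * np.cos(theta)                    # electrical angle for lambda/2 spacing

# Direct evaluation of (2.22)
af_sum = sum(np.exp(-1j * n * psi) for n in range(1, N + 1))

# Factored form (2.25)
af_fact = np.exp(-1j * (N + 1) * psi / 2) * np.sin(N * psi / 2) / np.sin(psi / 2)

print(np.allclose(af_sum, af_fact))                  # True: the two forms agree
print(np.degrees(theta[np.argmax(np.abs(af_sum))]))  # near 90: broadside peak
```
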
As an example.e. important parameters of array factors include beamwidth and sidelobe level. The beamwidth is commonly specified as nulltonull or halfpower beamwidth. The positions for the N=9 element . The nulltonull beamwidth is the distance in degrees between the first nulls around the mainbeam. The sidelobe level is commonly specified as the peak value of the array factor outside of the mainbeam. wn = 1 for all n. Directivity can be calculated for an array factor in the same manner as that of an antenna. The halfpower beamwidth is the distance in degrees between the halfpower points (or 3 dB down on the array factor) around the mainbeam. In addition. Magnitude of array factor for N=5 elements.19 Figure 4. The weights will again be uniform. i. the array factor for a 3x3 rectangular array is examined.
array will be d_ab = (aλ/2, bλ/2, 0) for a, b = 0, 1, 2. From (2.21), the array factor becomes

AF = Σ_{b=0}^{2} Σ_{a=0}^{2} e^{−jπ sinθ (a cosφ + b sinφ)}.   (2.26)

Applying the sum formula (2.23) twice, (2.26) reduces to

AF = [(1 − e^{−j3π sinθ cosφ}) / (1 − e^{−jπ sinθ cosφ})] [(1 − e^{−j3π sinθ sinφ}) / (1 − e^{−jπ sinθ sinφ})].   (2.27)

By factoring, (2.27) can be written as

AF = [e^{−j3π sinθ cosφ/2} / e^{−jπ sinθ cosφ/2}] [e^{−j3π sinθ sinφ/2} / e^{−jπ sinθ sinφ/2}] × [sin(3π sinθ cosφ/2) sin(3π sinθ sinφ/2)] / [sin(π sinθ cosφ/2) sin(π sinθ sinφ/2)].   (2.28)

For ease in plotting, the following variables will be introduced:

u = k_x λ/(2π) = sinθ cosφ   (2.29)

and

v = k_y λ/(2π) = sinθ sinφ.   (2.30)

The magnitude of the array factor is plotted in Figure 5. The sidelobes are 9.54 dB down from the main lobe (which is normalized to 0 dB in the figure).
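The −9.54 dB figure can be reproduced by brute force from (2.26); a sketch over the (u, v) plane (the 801-point grid is arbitrary, and the main-beam boundary |u| = |v| = 2/3 follows from the first nulls of the 3-element pattern in each dimension):

```python
import numpy as np

# Evaluate the 3x3 array factor (2.26) over the visible region u^2 + v^2 <= 1
u = np.linspace(-1, 1, 801)
U, V = np.meshgrid(u, u)

AF = np.zeros_like(U, dtype=complex)
for a in range(3):
    for b in range(3):
        AF += np.exp(-1j * np.pi * (a * U + b * V))

mag = np.abs(AF) / 9.0                  # normalize: the peak value is N = 9
mag[U**2 + V**2 > 1] = 0                # discard non-physical (u, v)

# Sidelobes: everything outside the main-beam square bounded by the first
# nulls at |u| = 2/3 and |v| = 2/3
side = np.where((np.abs(U) > 2 / 3) | (np.abs(V) > 2 / 3), mag, 0)
sll = 20 * np.log10(side.max())
print(sll)                              # about -9.5 dB
```
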
Figure 5. Magnitude of the array factor (dB) for the 2D array.

Beamwidths are more difficult to specify when the array factor is two-dimensional. Commonly, beamwidths are specified in certain planes (for instance, the elevation and azimuthal planes) and given in half-power or null-to-null form. The sidelobe level is again the maximum value of the array factor outside of the main beam, as in the one-dimensional case.

2.6. Aliasing

The steering vector (v) is the vector of propagation delays (or phase changes) across an array for a given wavevector k. It can be written mathematically as
v(k) = [e^{−jk⋅d_1}, e^{−jk⋅d_2}, …, e^{−jk⋅d_N}]^T.   (2.31)

Aliasing occurs when signals propagating in distinct directions produce the same steering vector. In that case, the array's response towards the two directions will be identical, so that the array cannot distinguish the two directions. This is similar to the signal processing version of aliasing, where if the sampling rate is too small in time, distinct frequencies cannot be resolved. For uniformly spaced linear arrays, there will exist plane waves from distinct directions with identical steering vectors if the spacing between elements, Δ, is greater than λ/2. Similarly, for uniformly spaced rectangular (planar) arrays with elements on the xy plane, there will exist distinct directions with identical steering vectors if the element spacing in the x- or y-directions is greater than λ/2. When aliasing exists, the main beam may be replicated elsewhere in the pattern. These replicated beams are referred to as grating lobes.

For arrays without a uniform structure, the distance between elements can be much larger than λ/2 without introducing aliasing. In general, no two distinct angles of arrival will produce identical steering vectors. However, while aliasing technically does not occur, there may be steering vectors that are very similar, so that grating lobes exist. Determining whether or not this occurs for an arbitrary array is very difficult. In this case, if a nonuniform array is decided upon, the array factor can be checked to ensure that
grating lobes do not occur. Mathematical studies on the uniqueness of steering vectors can be found in [40-41].
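The spacing condition can be demonstrated directly from (2.31); a small sketch for a linear array (a one-wavelength spacing is deliberately chosen so that cosθ₁ − cosθ₂ = λ/d for θ₁ = 0°, θ₂ = 90°):

```python
import numpy as np

lam = 1.0                       # wavelength (normalized)
N = 4

def steering(theta, d):
    """Steering vector (2.31) for a linear array along z with spacing d."""
    z = d * np.arange(N)
    return np.exp(-1j * (2 * np.pi / lam) * z * np.cos(theta))

# One-wavelength spacing: endfire (0 deg) and broadside (90 deg) are
# indistinguishable -- a grating lobe
print(np.allclose(steering(0.0, lam), steering(np.pi / 2, lam)))        # True

# Half-wavelength spacing: the two directions give distinct steering vectors
print(np.allclose(steering(0.0, lam / 2), steering(np.pi / 2, lam / 2)))  # False
```
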
III. WEIGHTING METHODS IN ANTENNA ARRAYS

3.1. Introduction

From (2.17), it is clear that the weights will have a significant impact on the output of the antenna array. Since the array factor is a linear function of the weights, weighting methods are well developed and can be selected to meet a wide range of objectives. These objectives include pattern steering, nulling energy from specific directions relative to an array, minimizing the Mean Squared Error (MSE) between a desired output and the actual output, or minimizing the sidelobe level outside a specified beamwidth in linear arrays. In addition, adaptive signal processing methods applied to antenna arrays will be discussed. These techniques are the subject of this chapter. Most of the methods described here apply to arrays of arbitrary geometry. However, for simplicity, examples will be presented for uniform linear arrays with half-wavelength spacing; the element positions will be given by d_n = (0, 0, nλ/2) for n = 0, 1, …, N−1.

3.2. Phase-Tapered Weights

The linear array of Section 2.4 had maximum response in the direction θ = 90°. The simplest method of altering the direction in which the array is steered is to apply a linear phase taper to the weights. The phase taper is such that it compensates for the phase delay associated with the propagation of the signal in the direction of interest. For example, if the array is to be steered in the direction θ_d, the weights would be given by

w_n = e^{jnπ cosθ_d}.   (3.1)

For these weights, the array factor becomes

AF = Σ_{n=0}^{N−1} e^{jnπ(cosθ_d − cosθ)},   (3.2)
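The steering behavior of (3.1)–(3.2) is easy to verify numerically; a sketch for N=5 steered to θ_d = 45° (the 0.1-degree grid is an arbitrary choice):

```python
import numpy as np

N = 5
n = np.arange(N)
theta_d = np.radians(45)
w = np.exp(1j * n * np.pi * np.cos(theta_d))    # phase taper of (3.1)

theta = np.linspace(0, np.pi, 1801)             # 0.1-degree grid
# AF of (3.2): sum_n w_n exp(-j n pi cos(theta))
AF = (w * np.exp(-1j * np.pi * np.outer(np.cos(theta), n))).sum(axis=1)

peak_deg = np.degrees(theta[np.argmax(np.abs(AF))])
print(peak_deg)    # ~45: the main beam has moved to theta_d
```
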
or

AF = (1 − e^{jπN(cosθ_d − cosθ)}) / (1 − e^{jπ(cosθ_d − cosθ)}).   (3.3)

The magnitude of the array factor (normalized so that the peak is unity, or 0 dB) is plotted in Figure 6 for N=5 and θ_d = 45°. The array factor has a maximum at the desired direction, and like the result in Figure 4, the sidelobes are 11.9 dB down from the mainlobe. This simple steering method can be used in two- or three-dimensional arrays as well as for arbitrary scan angles.

Figure 6. Array factor of steered linear array.

3.3. Schelkunoff Polynomial Method

A weighting scheme for placing nulls in specific directions of an array factor was developed by Schelkunoff [22, 42]. In general, an N-element array can null signals arriving from N−1 distinct directions.
To illustrate the method, the array factor

AF = Σ_{n=0}^{N−1} w_n e^{−jπn cosθ}   (3.4)

can be rewritten as a polynomial as

AF(z) = Σ_{n=0}^{N−1} w_n z^n,   (3.5)

where

z = e^{−jπ cosθ}.   (3.6)

Since a polynomial can be written as the product of its own zeros, it follows that

AF(z) = w_{N−1} ∏_{n=0}^{N−2} (z − z_n),   (3.7)

where the z_n are the zeros of the array factor. By selecting the desired zeros and setting (3.7) equal to (3.5), the weights can be found. As an example, assume an N=3 element array with zeros to be placed at 45° and 120°. In that case, the following values are calculated:

z_0 = e^{−jπ cos 45°}   (3.8)

and

z_1 = e^{−jπ cos 120°}.   (3.9)

Arbitrarily letting w_{N−1} = w_2 = 1, (3.7) becomes

AF(z) = z² − z(z_0 + z_1) + z_0 z_1.   (3.10)
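The zero-placement step of (3.7)–(3.10) maps directly onto polynomial root expansion; a sketch using NumPy (np.poly expands a monic polynomial from its roots, returning the coefficients of (3.10) in descending powers of z):

```python
import numpy as np

# Zeros of the array factor at 45 and 120 degrees, as in (3.8)-(3.9)
z0 = np.exp(-1j * np.pi * np.cos(np.radians(45)))
z1 = np.exp(-1j * np.pi * np.cos(np.radians(120)))

# Expand (z - z0)(z - z1) = z^2 - (z0 + z1) z + z0*z1, cf. (3.10)
coeffs = np.poly([z0, z1])               # [1, -(z0+z1), z0*z1]

# The array factor evaluated at the chosen zeros should vanish
for deg in (45, 120):
    zn = np.exp(-1j * np.pi * np.cos(np.radians(deg)))
    print(abs(np.polyval(coeffs, zn)))   # ~0: a null in the pattern
```
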
Setting (3.10) equal to the original form of the array factor (3.5), the weights are easily found to be:

W = [w_0, w_1, w_2]^T = [z_0 z_1, −(z_0 + z_1), 1]^T.   (3.11)

The normalized array factor for the specified weights is plotted in Figure 7. As desired, the pattern has nulls at 45° and 120°.

Figure 7. Array pattern with weights from Schelkunoff method.

3.4. Dolph-Chebyshev Method

Often in antenna arrays it is desirable to receive energy from a specific direction and reject signals from all other directions. In this case, for a specified main beamwidth the sidelobes should be as low as possible. For linear, uniformly spaced arrays of
isotropic sensors steered to broadside (θ_d = 90°), the Dolph-Chebyshev method will return weights that achieve this. In observing array factors as in Figure 4, note that the sidelobes decrease in magnitude away from the mainbeam. To have the lowest overall sidelobe level, the sidelobe with the highest intensity should be decreased at the expense of raising the intensity of the lower sidelobes. The result will be that for the minimum overall sidelobe level, the sidelobes will all have the same peak value. Dolph observed this and employed Chebyshev polynomials, which have equal-magnitude peak variations (or ripples) over a certain range. By matching the array factor to a Chebyshev polynomial, the equal-ripple (or constant-sidelobe) weights can be obtained. The actual process is straightforward but cumbersome to write out; for details see [22]. Several articles have been written on efficient computation of the Dolph-Chebyshev weights [43-44]. As an example, a uniformly spaced linear array with half-wavelength spacing and N=6 is used. The Dolph-Chebyshev weights are calculated for a sidelobe level of 30 dB. The associated magnitude of the array factor is plotted in Figure 8. Note that all the sidelobes are equal in magnitude at 30 dB. The null-to-null beamwidth is approximately 60°. A weighting method for obtaining minimum sidelobes in arbitrarily spaced arrays of any dimension, steered to any scan angle and for any antenna type is derived in Chapter 6.
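Assuming SciPy is available, its chebwin window implements exactly this Dolph-Chebyshev taper, and the 30 dB equal-ripple behavior can be checked numerically (the grid density and the simple first-null search below are illustrative choices, not part of the method):

```python
import numpy as np
from scipy.signal.windows import chebwin

N = 6
w = chebwin(N, at=30)                  # Dolph-Chebyshev weights, 30 dB sidelobes

psi = np.linspace(0, np.pi, 4001)      # psi = pi*cos(theta) for lambda/2 spacing
AF = (w * np.exp(-1j * np.outer(psi, np.arange(N)))).sum(axis=1)
mag_db = 20 * np.log10(np.abs(AF) / np.abs(AF).max())

# Find the first null (first local minimum), then the peak sidelobe beyond it
d = np.diff(mag_db)
first_null = np.argmax((d[:-1] < 0) & (d[1:] > 0)) + 1
sll = mag_db[first_null:].max()
print(round(sll, 1))                   # close to -30.0: equal-ripple sidelobes
```
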
3.5. Minimum Mean-Square Error (MMSE) Weighting

The weighting methods discussed previously have been deterministic; that is, they have not dealt with noise or statistical representations of the desired signals or interference. In this section, a more general beamforming technique is developed that takes into account the statistical behavior of the signal environment. Assume now the input to the array consists of one desired signal, s(t), with an associated wavevector k_s. Assume there exists noise at each antenna, n_i(t). The noise at each antenna can be written in vector form as
N(t) = [n_1(t), n_2(t), …, n_N(t)]^T.   (3.12)

In addition, assume there are G interferers, each having narrowband signals given by I_a(t) and wavevectors given by k_a, a = 1, 2, …, G. Using the steering vector notation for the phase delays as in (2.31), the input to the antenna array can then be written as

X(t) = s(t) v(k_s) + N(t) + Σ_{a=1}^{G} I_a(t) v(k_a).   (3.13)

The desired output from the antenna array (or spatial filter) is

Y_d(t) = s(t).   (3.14)

The actual output is

Y(t) = W^H X(t),   (3.15)

where H is the Hermitian operator (conjugate transpose). Equation (3.15) differs from (2.15) because the mathematics in the derivation will be simpler if the weights are used in the form of (3.15). The error can then be written as

e(t) = Y(t) − Y_d(t).   (3.16)

The minimum mean-squared error (MMSE) estimate seeks to minimize the expected value of the squared magnitude of e(t). The mean-squared error (MSE) is

MSE = E[e(t) e*(t)],   (3.17)

where * indicates complex conjugate and E[·] is the expectation operator. Expanding (3.17) with (3.15),
the MSE becomes

MSE = E[(W^H X(t) − s(t))(X^H(t) W − s*(t))].   (3.18)

Multiplying the terms above, the MSE becomes

MSE = E[W^H X(t) X^H(t) W] + E[s(t) s*(t)] − E[W^H X(t) s*(t)] − E[s(t) X^H(t) W].   (3.19)

The first term in (3.19) can then be rewritten as

E[W^H X(t) X^H(t) W] = W^H E[X(t) X^H(t)] W,   (3.20)

since the expectation is a linear operator and the weights are fixed. The second term in (3.19) is the signal power, σ_s²:

σ_s² = E[s(t) s*(t)].   (3.21)

Defining

Λ = E[X(t) s*(t)],   (3.22)

the third term in (3.19) becomes

E[W^H X(t) s*(t)] = W^H Λ.   (3.23)

Finally, the fourth term in (3.19) is just the complex conjugate of the third term:

E[s(t) X^H(t) W] = Λ^H W.   (3.24)

The autocorrelation matrix, R_XX, is defined to be

R_XX = E[X(t) X^H(t)].   (3.25)

Using these definitions, the MSE becomes

MSE = W^H R_XX W + σ_s² − W^H Λ − Λ^H W.   (3.26)
The goal is to find the W that produces the minimum MSE. The gradient of (3.26) with respect to W is

∇MSE = 2R_XX W − 2Λ.   (3.27)

Setting (3.27) equal to zero and solving gives the optimal weights, W_opt:

W_opt = R_XX⁻¹ Λ.   (3.28)

Equation (3.28) requires two pieces of information: the autocorrelation matrix and the vector Λ. The inverse of the autocorrelation matrix is often estimated using the Sample Matrix Inverse (SMI) method, which uses K snapshots of the input vector X to formulate the estimate. The estimate is denoted with the bar overhead:

R̄_XX⁻¹ = [Σ_{k=1}^{K} X(k) X^H(k)]⁻¹.   (3.29)

Assuming the signal of interest is uncorrelated in time with the noise and interference, (3.22) along with (3.13) yields

Λ = E[(s(t) v(k_s) + N(t) + Σ_{a=1}^{G} I_a(t) v(k_a)) s*(t)] = σ_s² v(k_s).   (3.30)

Hence, the vector Λ can be determined if the direction of the signal (given by k_s) and the signal power (σ_s²) are known. Often the incoming direction and power can be determined by using a training sequence to calibrate the array. The optimal weights can then be rewritten using (3.28) along with (3.30) as

W_opt = σ_s² R_XX⁻¹ v(k_s).   (3.31)
Equation (3.31) represents the weights that minimize the MSE. The optimal MSE is found by substituting (3.31) into (3.26):

MSE_opt = σ_s² − σ_s⁴ v^H(k_s) R_XX⁻¹ v(k_s).   (3.32)

Similar formulations can be used to formulate weights that maximize the signal-to-noise ratio (SNR) when the autocorrelation matrix of the interference and noise can be estimated [45].

As an example, consider the case of the desired signal arriving from θ_d = 110° with a signal power of σ_s² = 1. Two interferers, each with σ_I² = 10, arrive from θ_1 = 40° and θ_2 = 90°. The array will have N=3 elements. Two cases will be considered: the first with noise power σ_n² = 0.01 (SNR = 20 dB), and the second with σ_n² = 1 (SNR = 0 dB). The optimal weights can then be calculated using (3.31). The resulting array factor magnitudes are plotted in Figure 9. Observe that for the high SNR case, the pattern places nulls exactly in the directions of the interferers. For the low SNR case, the pattern puts less emphasis on nulling out the interferers. This is because the gain in combating independent noise sources is best obtained by combining the received signals with equal gain [45]. Note that neither array factor is maximum towards the signal of interest, θ_d = 110°.
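This example is easy to reproduce; a sketch that builds the autocorrelation matrix analytically from the scenario above and applies (3.31) (the steering-vector convention matches (2.31) for a half-wavelength linear array):

```python
import numpy as np

N, lam = 3, 1.0
z = (lam / 2) * np.arange(N)                     # half-wavelength linear array

def v(theta_deg):
    """Steering vector (2.31) for arrival angle theta (polar, degrees)."""
    return np.exp(-1j * (2 * np.pi / lam) * z * np.cos(np.radians(theta_deg)))

sig2_s, sig2_I, sig2_n = 1.0, 10.0, 0.01         # high-SNR case (SNR = 20 dB)
vs = v(110)

# R_XX: signal + interference + noise contributions (signals uncorrelated)
R = (sig2_s * np.outer(vs, vs.conj())
     + sig2_I * np.outer(v(40), v(40).conj())
     + sig2_I * np.outer(v(90), v(90).conj())
     + sig2_n * np.eye(N))

W = sig2_s * np.linalg.solve(R, vs)              # optimal weights, (3.31)

AF = lambda deg: np.vdot(W, v(deg))              # response W^H v(theta)
print(abs(AF(40)) / abs(AF(110)))                # << 1: null on interferer 1
print(abs(AF(90)) / abs(AF(110)))                # << 1: null on interferer 2
```
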
Figure 9. Array factor magnitudes for MMSE weights.

3.6. The LMS Algorithm

The weights discussed up until now have not been adaptable; that is, they do not attempt to change as the signal environment changes. A weight-updating strategy that changes with its environment is known as an adaptive algorithm, and adaptive signal processing has become a field in itself. In this section, the first and arguably most widely used adaptive algorithm is discussed: the Least Mean Square (LMS) algorithm. This algorithm was invented by Bernard Widrow along with M. E. Hoff, Jr., and published in a primitive form in 1960 [46]. The Applebaum algorithm [47] was developed independently in 1966 and largely uses the same ideas. The algorithm assumes some a priori knowledge; the versions primarily differ in the a priori knowledge required. In this version (the spatial LMS algorithm), the known information is assumed to be the desired signal power (σ_s²) and
the parameter λ should be chosen according to . If the environment changes. The LMS algorithm approximates the autocorrelation matrix at each time step by R XX (k ) = X(k ) X H (k ) . recall that the gradient of the MSE as a function of the weights (W) is given by (3. it has fairly decent convergence properties and has been extensively studied. k s .26). (3.34) into (3. The algorithm’s simplicity is its primary reason for its widespread use.35 the signal direction. will be ordered and written as X(k). the LMS algorithm simply increments the weights in the direction of decreasing the MSE. Samples of the input vector. The algorithm iteratively steps towards the MMSE weights.36) Equation (3. Then the gradient of the MSE can be approximated at each time step as ∇MSE (k ) = 2X(k ) X H (k ) W (k ) − 2σ s2 v (k s ) . (3. In order to have stable results (the expected MSE will converge to a constant value).35) produces the LMS algorithm: 2 W(k + 1) = W(k ) + λ σ S v(k S ) − X(k ) X H (k ) W(k ) .34) To minimize the MSE. X. { } (3. The update algorithm for the weights can then be written as W(k + 1) = W(k ) − λ 2 ∇MSE(k ) . then the algorithm will step towards the new MMSE weights. To accomplish the iterative minimization of the MSE. The versions primarily differ in the a priori knowledge required. Substituting (3.35) where λ is a positive scalar that controls how large the steps are for the weights. In addition.36) actually represents one of the many forms of the LMS algorithm.33) (3.
0 < λ < 2/λ_max(R_XX),   (3.37)

where λ_max(R_XX) is the largest eigenvalue of the autocorrelation matrix [48]. The speed of the convergence is governed by the condition number (ratio of largest to smallest eigenvalues) of the autocorrelation matrix [49].

As an example of the LMS algorithm, the interference and noise scenario of Section 3.5 is again considered, this time with an SNR of 20 dB. The array will be the linear array of N=5 elements with half-wavelength spacing. The noise will be additive white Gaussian noise (AWGN) that is independent at each antenna. The parameter λ is chosen to be

λ = 0.1/λ_max(R_XX) = 0.015.   (3.38)

The algorithm is initiated with a weight of unity applied to all elements:

W(1) = [1, 1, 1, 1, 1]^T.   (3.39)

An example run is conducted, and the resulting MSE at each iteration [from (3.26)] is plotted in Figure 10, along with the optimal MSE [from (3.32)]. The LMS algorithm is fairly efficient in moving towards the optimal weights for this case. On average, the MSE decreases; however, since the algorithm uses a guess of the autocorrelation matrix at each time step, some of the steps actually increase the MSE. This algorithm is also fairly robust to changing environments.
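A minimal simulation of the update (3.36) illustrates the convergence behavior; this sketch uses the same angles as the earlier MMSE example, but the step size, interference powers, and BPSK-style snapshots are illustrative assumptions rather than the exact run plotted in Figure 10:

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam = 5, 1.0
z = (lam / 2) * np.arange(N)
v = lambda deg: np.exp(-1j * (2 * np.pi / lam) * z * np.cos(np.radians(deg)))

sig2_s = 1.0
vs = v(110)
W = np.ones(N, dtype=complex)        # initial weights, cf. (3.39)
mu = 0.005                           # step size (the lambda of (3.36))

err2 = []
for k in range(3000):
    # One snapshot: +/-1 desired signal, two strong interferers, unit-power noise
    s = rng.choice([-1.0, 1.0])
    X = (s * vs
         + np.sqrt(10) * rng.choice([-1.0, 1.0]) * v(40)
         + np.sqrt(10) * rng.choice([-1.0, 1.0]) * v(90)
         + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2))
    err2.append(abs(np.vdot(W, X) - s) ** 2)     # instantaneous squared error
    # LMS update (3.36): W <- W + mu*(sigma_s^2 v(k_s) - X X^H W)
    W = W + mu * (sig2_s * vs - X * np.vdot(X, W))

print(np.mean(err2[:200]) > np.mean(err2[-200:]))   # True: the error falls
```
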
Several adaptive algorithms have expanded upon ideas used in the original LMS algorithm. Most of these algorithms seek to produce improved convergence properties at the expense of increased computational complexity. For instance, the recursive least-squares (RLS) algorithm seeks to minimize the MSE just as in the LMS algorithm [48]; however, it uses a more sophisticated update, based on the matrix inversion lemma, to find the optimal weights [45]. Both of these algorithms (and all others based on the LMS algorithm) have the same optimal weights that they attempt to converge to.

Figure 10. MSE at each iteration, along with the optimal MSE, given by (3.31).
IV. METHODS OF ANTENNA ARRAY GEOMETRY OPTIMIZATION

4.1. Introduction

The field of electromagnetics was unified into a coherent theory and set of four fundamental equations by James Clerk Maxwell in 1873 [50]. These equations are known as Maxwell's equations. The first is Gauss's law:
∇ ⋅ D = ρV ,
(4.1)
where D is the electric flux density and ρV is the volume charge density. The second equation states that “magnetic monopoles do not exist”, and can be written in mathematical form:
∇⋅B = 0,
(4.2)
where B is the magnetic flux density. The third equation is known as Ampere's law:

∇ × H − ∂D/∂t = J,   (4.3)
where H is the magnetic field and J is the impressed electric current density. The fourth is Faraday's law:

∇ × E + ∂B/∂t = 0,   (4.4)

where E is the electric field. While there are only four equations in the set, they are complicated enough that they can only be solved in closed form for some basic canonical shapes. As a result, numerical methods for solving electromagnetic problems became necessary. A thorough introduction and survey of the methods can be found in [51]. Among the most popular of the numerical methods is the finite-difference time-domain (FDTD) method, developed in 1966 by Yee at Lawrence Livermore National
Laboratories [52]. This method discretizes space and time and computes the electric and magnetic fields using discretized forms of Ampere's law and Faraday's law. The algorithm initially computes the electric fields (assuming the magnetic fields are known) using Ampere's law. A small time step later, the algorithm computes the magnetic fields at that time using Faraday's law (along with the calculated electric field). This process is repeated as long as desired and has been widely successful in modeling numerous electromagnetic problems. Another popular method is the Integral Equation (IE) Method of Moments (MoM), which numerically solves complex integral equations by assuming a solution in the form of a sum of weighted basis functions along the structure being analyzed. The weights are then found by introducing boundary conditions and solving an associated matrix equation, thereby leading to the solution [53].

Because of the difficulty in obtaining solutions to electromagnetic problems, optimization is not simple. Antenna arrays, being a specific class of electromagnetic problems, are no exception. However, significant developments over the past 50 years in the field of mathematical optimization are now being applied to electromagnetic problems. The tremendous increase in computing power over the last few decades has enabled complex problems to be solved, and has led to large advances in the fields of numerical electromagnetics and optimization. This chapter describes the optimization methods that have penetrated the electromagnetic field in the late 20th century.

The first set of methods, linear programming and convex optimization, are part of a class of optimization methods that are deterministic. The problems have a
unique solution that can be verified to be globally optimal. However, due to the complex nature of the problems, the solutions are obtained numerically and not analytically. The second set of methods discussed in this chapter, Simulated Annealing (SA) and Particle Swarm Optimization (PSO), are part of a class of optimization methods that are stochastic in nature. These methods produce solutions to the most general optimization problems, which have very little structure and cannot be solved via other methods. The resulting solutions from these methods are unfortunately not verifiable to be globally optimal. However, they have recently been receiving a lot of attention in the antenna field because they can be applied to a wide range of problems and can be used to obtain solutions that achieve a desired performance metric.

In March 2007, the IEEE Transactions on Antennas and Propagation dedicated an entire issue to optimization techniques in electromagnetics and antenna system design. An overview of the methods and their applications to electromagnetics can be found in [54]. Many of these papers used techniques that were stochastic in nature, including the popular genetic algorithm (GA) [55]. These optimization techniques are often coupled with the numerical methods discussed previously. For instance, the PSO algorithm was used in conjunction with the FDTD method in [56], and the genetic algorithm was used along with the method of moments for the design of integrated antennas in [57].

4.2. Linear Programming

The most general form of a mathematical optimization problem can be expressed as
minimize   f(x)
subject to x ∈ χ.   (4.5)
Here f(x) is the objective function to be minimized, and χ is known as the feasible set, or set of all possible solutions. Without any constraints, the vector x can be any vector in ℜ^N. The solution x_opt to (4.5) will have the property

f(x_opt) ≤ f(x) ∀ x ∈ χ,   (4.6)

where ∀ is commonly used in mathematics to state 'for all'. The solution is not necessarily unique, but exists as long as χ is not the empty set. In the following, 'subject to' will be abbreviated as 's.t.'.

A linear program (LP) is a widely studied optimization problem that has numerous practical applications, one of which is shown at the end of this section. The theory on this subject was developed by George Dantzig and John von Neumann in 1947 [58]. The variables in a linear program are written as an N-dimensional vector of real numbers:

x = [x1 x2 ⋯ xN]^T.   (4.7)

The objective function to be minimized is a linear function of the problem variables:

f(x) = c^T x,   (4.8)

where c is an N-dimensional (real) vector. The feasible set χ in a linear program is described by a set of M affine inequalities, each of which can be written in the form

a_i^T x ≤ b_i,   (4.9)

where a_i is an N-dimensional real vector, b_i is a real number, and i = 1, 2, …, M. Each constraint in the
form of (4.9) divides the space ℜ^N into two half-spaces: in two dimensions (N = 2) the divider is a straight line, in three dimensions (N = 3) the divider is a plane, and so on. The resulting feasible region χ is the intersection of all of these half-spaces. The set of constraints

a_1^T x ≤ b_1
a_2^T x ≤ b_2
⋮
a_M^T x ≤ b_M   (4.10)

is often abbreviated as

Ax ≤ b,   (4.11)

where A is an M × N matrix given by

A = [a_1^T; a_2^T; ⋯; a_M^T],   (4.12)

and b is an M-dimensional vector given by

b = [b_1; b_2; ⋯; b_M].   (4.13)

The inequality sign in (4.11) is understood componentwise (it must be satisfied for all inequalities). The standard form of an LP can then be written as in (4.14):

min c^T x
s.t. Ax ≤ b.   (4.14)
An equality constraint can be viewed as two inequality constraints, so LPs are often written in the form given in (4.15):

min c^T x
s.t. Ax ≤ b
     Cx = f,   (4.15)

where C is a matrix and f is a vector. If no vector x satisfies all the constraints, the problem is said to be infeasible. Extensive work has gone into understanding the problem presented in (4.15). Solutions found to (4.15) must satisfy a set of optimality conditions, and they can therefore be verified to be globally optimal [59]. In addition, several numerical methods, such as the simplex algorithm [60] and the rapid interior point method [61], have been developed to efficiently solve the LP. Commonly used computational software programs, including Mathematica and Matlab, now have built-in routines for solving linear programs. As a result, if an optimization problem can be put into the form of an LP, an optimal vector (if one exists) can be found efficiently and verified to be globally optimal.

To illustrate the utility of linear programs, a method of determining sidelobe-minimizing weights for symmetric linear arrays with real weights steered to broadside will be presented, with arbitrary antenna elements and an arbitrary beamwidth. This follows the discussion in [27]. The results will be extended in Chapter 6 to work for arbitrarily spaced arrays with complex weights steered to any angle. A symmetric linear array is an array with elements spaced symmetrically about the origin, as shown in Figure 11.
An array of this type with real weights and 2N elements will have an array factor given by

AF(θ) = Σ_{n=1}^{N} w_n cos(2π d_n cos θ),   (4.16)

where d_n is the position of the n-th element along the z-axis.

Figure 11. Symmetric linear array.

The objective is to determine the weights that produce the lowest possible sidelobe level, where the sidelobe level is defined as the maximum value of the magnitude of the array factor outside of a specified beamwidth. The set of all angles in which the array factor is to be suppressed will be written as Θ. The sidelobe level (SLL) can be written mathematically as

SLL = max_{θ∈Θ} |AF(θ)|.   (4.17)
Since the array factor is to be maximum at broadside, the following constraint is imposed:

AF(90°) = 1.   (4.18)

The problem of minimizing the sidelobe level can then be written as an optimization problem:

min SLL
s.t. AF(90°) = 1.   (4.19)

This problem can be written as an LP in standard form. First, let t represent the maximum sidelobe level. Sample the region Θ into R sample points (θ_1, θ_2, …, θ_R). The sidelobes will be suppressed at the sample points; following the optimization procedure, it can be verified that the sidelobes are also suppressed between the samples. Equation (4.19) can then be rewritten into the form given in (4.20):

min t
s.t. AF(90°) = 1
     |AF(θ_i)| ≤ t,  i = 1, 2, …, R.   (4.20)

Each of the constraints in (4.20) can be written as an affine constraint as in (4.10). To see this, define the problem variables to be

X = [t w_1 w_2 ⋯ w_N]^T.   (4.21)

The objective function in (4.20) can be rewritten as

t = [1 0 0 ⋯ 0] X = c^T X.   (4.22)
The equality constraint in (4.20) can be rewritten using (4.16) along with the vector X as

[0 1 1 ⋯ 1] X = 1,   (4.23)

or

a_0^T X = 1.   (4.24)

Similarly, the inequality constraints in (4.20) can be rewritten as

−t ≤ AF(θ_i) ≤ t,  for i = 1, 2, …, R.   (4.25)

Using (4.16), the inequality on the left in (4.25) becomes

−[1 cos(2π d_1 cos θ_i) cos(2π d_2 cos θ_i) ⋯ cos(2π d_N cos θ_i)] X ≤ 0,   (4.26)

or

a_i^T X ≤ 0.   (4.27)

Finally, the inequality on the right becomes

[−1 cos(2π d_1 cos θ_i) cos(2π d_2 cos θ_i) ⋯ cos(2π d_N cos θ_i)] X ≤ 0,   (4.28)
or

f_i^T X ≤ 0.   (4.29)

Hence, using (4.22), (4.24), (4.27), and (4.29), the optimization problem of (4.20) can be rewritten as

min c^T X
s.t. a_0^T X = 1
     a_i^T X ≤ 0,  i = 1, 2, …, R
     f_i^T X ≤ 0,  i = 1, 2, …, R.   (4.30)

Equation (4.30) is in the same form as the standard LP in (4.15); hence, solutions can be rapidly found to this problem using numerical computational software and guaranteed to be globally optimal.

As an example, consider the following 6-element symmetric linear array with positions

d^T = [±0.5λ  ±0.2λ  ±0.85λ].   (4.31)

Finding weights that minimize the sidelobe level while directing the maximum to broadside cannot be done via the Dolph-Chebyshev method, because the array does not have uniform spacing. The beamwidth will be 40°; hence, the region of sidelobe suppression will be

Θ = {0° ≤ θ ≤ 70°} ∪ {110° ≤ θ ≤ 180°}.   (4.32)

Using the linear programming method described in this section, the optimal weights can be found to be

w_1 = 0.3724,  w_2 = 0.4967,  w_3 = 0.1309.   (4.33)
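The LP (4.30) can be assembled and solved directly with an off-the-shelf solver. The sketch below uses SciPy's `linprog`; the element positions follow the example above, but the sampling density of Θ and the solver choice are illustrative assumptions, not part of the dissertation's method.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([0.5, 0.2, 0.85])   # half-positions of the symmetric array, in wavelengths
# sample the suppression region Theta of (4.32)
theta = np.deg2rad(np.r_[np.linspace(0, 70, 200), np.linspace(110, 180, 200)])

# AF(theta) = sum_n w_n cos(2 pi d_n cos(theta)); variables are X = [t, w1, w2, w3]
C = np.cos(2 * np.pi * np.outer(np.cos(theta), d))          # shape (R, 3)
ones = np.ones((len(theta), 1))
A_ub = np.vstack([np.hstack([-ones, C]),                    #  AF - t <= 0, cf. (4.28)-(4.29)
                  np.hstack([-ones, -C])])                  # -AF - t <= 0, cf. (4.26)-(4.27)
b_ub = np.zeros(2 * len(theta))
A_eq = np.array([[0.0, 1.0, 1.0, 1.0]])                     # AF(90 deg) = sum_n w_n = 1
b_eq = np.array([1.0])

res = linprog(c=[1.0, 0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] + [(None, None)] * 3)      # t >= 0, weights unrestricted
t_opt, w = res.x[0], res.x[1:]
```

With the positions of (4.31), this setup should recover weights close to (4.33); note that the weights sum to one, consistent with the broadside constraint, and the achieved sidelobe level in decibels is 20 log10(t_opt).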
In (4.33), w_1 is the weight associated with the first pair of positions in (4.31), while w_2 and w_3 are associated with the second and third pairs, respectively. The resulting array factor is plotted in Figure 12. The dashed vertical lines in Figure 12 define the boundary of the main beam and specify the region in which the sidelobes are suppressed. The maximum sidelobe level outside the main beam is −11.21 dB. Note that the sidelobes are equal in magnitude, which is expected for sidelobe-minimizing weights.

Figure 12. Array factor for optimal weights found via linear programming.

4.3. Convex Optimization

Convex optimization problems are a subclass of the general optimization problem given by (4.5). They have recently received a lot of attention in the engineering community because of their wide applicability. These applications include robotics [62],
signal processing [63], image processing [64], and information theory [65]. An excellent text on convex optimization for the engineering community has been written [66].

A convex optimization problem is defined by two fundamental characteristics: the feasible set χ must be convex, and the objective function f(X) must be a convex function. A convex set is defined such that for every x_1 and x_2 in the set χ, all points along a straight line between x_1 and x_2 are also in χ. Mathematically, any point z between x_1 and x_2 can be written as

z = αx_1 + (1 − α)x_2,   (4.34)

where α is a scalar between 0 and 1; for all α in this range, z must be in the set. Convex sets are convenient to work with because search algorithms can always move between the current feasible point and the optimal point without running into the boundary of the set. Examples of convex sets are shown in Figure 13, and examples of non-convex sets are shown in Figure 14.

Figure 13. Examples of convex sets.
Figure 14. Examples of non-convex sets. Each set contains points x_1 and x_2 such that not all points z between them are in the set.

A function is said to be convex on a set χ if, for any two points X and Y in χ, it satisfies the inequality

f(αX + (1 − α)Y) ≤ αf(X) + (1 − α)f(Y).   (4.35)

This means that the curve of the function f will always lie below a straight line connecting the two points f(X) and f(Y). This is illustrated for a one-dimensional function f(t) in Figure 15; the secant line between two points x and y is also drawn, and note that f(t) lies below this line everywhere between x and y.
Figure 15. Illustration of a convex function.

A function is said to be strictly convex if the inequality (≤) in (4.35) is replaced with a strict inequality (<). For convex functions, local minimums are always global minimums; for a strictly convex function, the global minimum is unique. This property makes convex functions convenient to work with in optimization.

As an example of proving a function is convex, consider M convex functions given by f_1(X), f_2(X), …, f_M(X). Define the function F to be the pointwise maximum of the set:

F(X) = max_i f_i(X).   (4.36)

The goal is to show that F is also convex. To accomplish this, rewrite (4.36) as

F(αX + (1 − α)Y) = max_i f_i(αX + (1 − α)Y).   (4.37)
Using the convexity of each function f_i, it follows that

max_i f_i(αX + (1 − α)Y) ≤ max_i [αf_i(X) + (1 − α)f_i(Y)].   (4.38)

Since the maximum of a sum of functions over one index can be no larger than the sum of the maxima of each function taken individually,

max_i [αf_i(X) + (1 − α)f_i(Y)] ≤ max_i αf_i(X) + max_j (1 − α)f_j(Y).   (4.39)

Equations (4.37)-(4.39) show that

F(αX + (1 − α)Y) ≤ α max_i f_i(X) + (1 − α) max_j f_j(Y),   (4.40)

which proves that F is convex. Hence, the pointwise maximum of convex functions is convex; this property will be used in Chapter 6.

Convex optimization problems are rapidly solvable with computers. The interior point methods developed for linear programs have been efficiently extended to convex problems [67], and free convex optimization packages, such as CVX and YALMIP, have been written for use with Matlab and are available online. Since the optimal points found can be mathematically proven to be globally optimal, putting a problem into convex form is very desirable. A convex optimization problem will be derived and solved in Chapter 6 which greatly extends the minimum-sidelobe weighting of Section 4.2.

4.4. Simulated Annealing

The discussion now turns to stochastic optimization algorithms. These algorithms work on the most general type of optimization problems: since such problems have very little structure, the methods can be applied to a wide range of practical problems. However, they tend to use
random searches, and the results are not guaranteed to be globally optimal. Nevertheless, they have recently been employed extensively in the engineering community.

The simulated annealing (SA) algorithm attempts to mimic the physical process of annealing of solids. This process involves heating a solid material to a high temperature and then allowing it to cool at a very slow rate. The result is that the particles in the solid arrange themselves in the lowest-energy-state configuration, usually an ordered lattice of some sort [68]. The SA algorithm attempts to optimize via the same procedure. It was originally introduced in 1983 by Kirkpatrick in the journal Science as a generalization of the Monte Carlo method for examining the equations of state of n-body systems [69].

The SA algorithm requires a cost function f [also known as the objective function in (4.5)], an initial feasible point (x_1), and a perturbation mechanism for obtaining new points around the current point. The following discussion describes finding a minimum. The algorithm evaluates the cost function at the start point, perturbs the point to a new point, evaluates the cost function at this point, and repeats. From the current point x_i, a candidate new point (x̂_{i+1}) in the feasible set is chosen using the perturbation mechanism. If the new point decreases the cost function, then

Δf_i = f(x̂_{i+1}) − f(x_i) < 0   (4.41)

and the current solution is updated according to

x_{i+1} = x̂_{i+1}.   (4.42)

The algorithm does not want to accept only points that decrease the cost function; this would cause the algorithm to find a local minimum about the initial point. If the next
candidate point increases the objective function, then the probability that the algorithm updates the current solution to the candidate solution is given by

P{x_{i+1} = x̂_{i+1}} = exp(−Δf_i / T),   (4.43)

where T represents the current "temperature" of the system. If the algorithm does not accept the next candidate point, then it simply remains at the previous point:

x_{i+1} = x_i.   (4.44)

If T is very large, then almost all transitions occur, and the result is a random walk through the space of points, independent of the cost function. When T becomes small, only transitions that decrease the value of the cost function occur; the result is that the algorithm converges to the local minimum of the neighborhood of points in which the current point resides.

The simulated annealing algorithm starts the optimization procedure at a high temperature (sufficiently high that most transitions occur) and lowers the temperature slowly enough that a satisfactory solution is found. There are many methods of choosing this 'cooling schedule'; a collection of these is described in [70], and a specific method is utilized in Chapter 5. The algorithm is stopped once transitions to new candidate points do not occur over a large number of attempts; the algorithm is then said to have converged.

The SA algorithm, while relatively simple to implement, requires a good deal of care in choosing the initial temperature and an appropriate cooling schedule so that a globally optimum point is likely to be found. Increased confidence that the proposed solutions are globally optimum can be obtained by running the algorithm multiple times from various initial points.
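The procedure in (4.41)-(4.44) can be sketched compactly. In the sketch below, the Gaussian perturbation, the geometric cooling schedule, and the toy quadratic objective are illustrative assumptions only; they are not the perturbation mechanism or cooling schedule used in Chapter 5.

```python
import numpy as np

def simulated_annealing(f, x0, T0=1.0, alpha=0.995, n_steps=2000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    T = T0
    for _ in range(n_steps):
        cand = x + step * rng.standard_normal(x.shape)  # perturbation mechanism
        df = f(cand) - fx                               # the delta-f of (4.41)
        # accept improvements outright; accept uphill moves with probability
        # exp(-df/T), as in (4.43); otherwise remain at the current point, (4.44)
        if df < 0 or rng.random() < np.exp(-df / T):
            x, fx = cand, fx + df
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        T *= alpha                                      # geometric cooling (assumed schedule)
    return best_x, best_f

# toy two-dimensional objective, assumed for demonstration
best_x, best_f = simulated_annealing(lambda z: float(np.sum(z ** 2)), x0=[3.0, -2.0])
```

As T shrinks, the acceptance test degenerates into accepting only downhill moves, which is exactly the random-walk-to-greedy transition described above.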
4.5. Particle Swarm Optimization (PSO)

The PSO algorithm is a stochastic, evolutionary algorithm capable of effectively optimizing difficult multidimensional optimization problems. Originally introduced by Kennedy and Eberhart in 1995 [73], the PSO algorithm has been gaining popularity over the genetic algorithm and other evolutionary algorithms because of its simplicity in implementation and efficient optimization. In addition, the algorithm lends itself well to parallel processing, which is an added bonus. Examples of the successful application of the PSO algorithm in the electromagnetics community include antenna design [71] and array geometry selection [72].

The PSO algorithm attempts to mimic the behavior of birds or bees in obtaining a food source. Initially, a flock of birds may start out in random directions searching for food. As each individual bird travels on its path, it may find food in various locations. The bird remembers its own 'personal best' location where it had found food. In addition, the bird may periodically fly up and survey the progress of the other birds in the flock. In this manner, each individual bird will be aware of the 'global best' position, or location found with the most food by any bird in the flock. Using this general procedure, a flock of birds will descend on the region in the area that has a relatively high amount of food available.

The PSO algorithm translates this behavior into a mathematical algorithm for optimization. The PSO algorithm consists of a set of particles (the 'swarm'), which are analogous to the birds. The algorithm also has a cost or fitness function, which evaluates
the current position of each particle; this is analogous to a bird evaluating how much food is in a certain location. It will be assumed in the following discussion that there is a feasible set χ to be optimized over in which each element can be represented as an N-dimensional real vector, and a fitness function f : χ → ℜ which can evaluate each position to a real number.

The algorithm starts with M particles selected at random positions within the feasible set. The number M depends on the dimension and difficulty of the problem, and is one of the parameters left to the algorithm implementer. The i-th particle at time t will be at the location given by x_i^t, where i is an integer between 1 and M, and t is an integer specifying the current time step. The algorithm is iterative, and the locations will change at each time step. Each particle will also have a randomly selected initial velocity vector; the i-th particle at time t will have a velocity written as v_i^t.

In addition, each particle will record the location of its 'personal best position'. This is the location that the current particle has found to be the fittest (minimum) so far along its trajectory. The personal best position will be written for the i-th particle as p_i, and the corresponding fitness value for each of these positions will be written as P_i. Each particle will also be aware of the 'global best position', which is the position that has been found to be the fittest so far from among all the particles and will be written as the vector g. The global best value will also be recorded, and it will be written as

G = min_i P_i.   (4.45)
Once the random initial positions and velocities have been chosen for each particle, the fitness value at each of the positions is evaluated, giving the personal and global best positions and values. The algorithm then updates the velocity and position of each particle at every time step until the simulation is stopped.

To perform the updates, first define the diagonal matrices U_{1i}^t and U_{2i}^t according to

U_{ai}^t = diag(u_{a1}^t, u_{a2}^t, …, u_{aN}^t),   (4.46)

where a is equal to 1 or 2, and each u_{an}^t is an independent uniformly distributed real variable on [0, 1]. The velocity is then updated at each time step according to

v_i^t = w_V v_i^{t−1} + c_1 U_{1i}^t (p_i − x_i^{t−1}) + c_2 U_{2i}^t (g − x_i^{t−1}),   (4.47)

where w_V is a real number called the 'inertial weight', c_1 is a real number that accelerates the particle towards its personal best position, and c_2 is a real number that accelerates the particle towards the global best position. The inertial weight w_V is a real number in the range [0, 1] that controls how much the updated velocity depends on the previous velocity. Studies on PSO have shown that 2.0 is a good choice for both parameters c_1 and c_2 [74]. The position is then updated according to

x_i^t = x_i^{t−1} + v_i^{t−1}.   (4.48)

In this manner, each particle moves in a random fashion around the solution space but is stochastically drawn towards its previous best location and the swarm's global best location.
After the positions are updated, the fitness function is evaluated at each of the new locations. Finally, the personal best and global best positions and values are updated if possible, and then the process repeats. This algorithm is relatively simple to implement but performs well on general optimization problems in comparison to other evolutionary algorithms. The biggest drawback to the method is that the resulting solutions cannot be verified to be globally optimal.
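The updates (4.45)-(4.48) translate into a compact sketch. The swarm size, step count, inertial weight w_V = 0.7, velocity clamp, and spherical test objective below are illustrative assumptions (c_1 = c_2 = 2.0 follows the recommendation cited above); the clamp in particular is a common practical safeguard, not part of the equations in the text.

```python
import numpy as np

def pso(f, lo, hi, M=30, n_steps=300, w_v=0.7, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    N = lo.size
    x = rng.uniform(lo, hi, (M, N))                  # random initial positions
    v = rng.uniform(-(hi - lo), hi - lo, (M, N))     # random initial velocities
    p = x.copy()                                     # personal best positions p_i
    P = np.array([f(xi) for xi in x])                # personal best values P_i
    g = p[np.argmin(P)].copy()                       # global best position g
    G = P.min()                                      # global best value, (4.45)
    vmax = hi - lo
    for _ in range(n_steps):
        U1 = rng.random((M, N))                      # diagonals of U_1i^t, per (4.46)
        U2 = rng.random((M, N))
        v_new = w_v * v + c1 * U1 * (p - x) + c2 * U2 * (g - x)   # (4.47)
        x = x + v                                    # (4.48): position uses the previous velocity
        v = np.clip(v_new, -vmax, vmax)              # velocity clamp (assumed, common practice)
        fx = np.array([f(xi) for xi in x])
        better = fx < P
        p[better], P[better] = x[better], fx[better]
        j = np.argmin(P)
        if P[j] < G:
            g, G = p[j].copy(), P[j]
    return g, G

# spherical test objective over [-5, 5]^3, assumed for demonstration
g_best, G_best = pso(lambda z: float(np.sum(z ** 2)), lo=np.full(3, -5.0), hi=np.full(3, 5.0))
```

Because the best-so-far values are only ever replaced by better ones, G is nonincreasing over the run, mirroring how the flock narrative accumulates the best food location found by any bird.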
V. ARRAY GEOMETRY OPTIMIZATION FOR INTERFERENCE SUPPRESSION

5.1. Introduction

In this chapter, the influence of array geometry on the performance of adaptive antenna arrays is examined by solving a specific wireless communication problem. The problem of interference is addressed, which can occur intentionally (as in jamming) or unintentionally (as in wireless devices sharing a frequency band). The input to the array is a summation of noise, the desired signal, and interference from various directions; the input therefore depends on the positions of the array elements.

The steady-state weights that many adaptive algorithms (for instance, LMS and RLS) converge to are the MMSE weights in (3.21). The optimal MSE is rewritten from (3.31):

MSE_opt = σ_s² − σ_s⁴ v^H(k_s) R_XX^{−1} v(k_s),   (5.1)

which is a fair measure of the performance of an adaptive array. In (5.1) the weights are absent because the optimal weights have already been substituted into the expression. The terms that remain in (5.1) are the steering vector v(k_s) and the autocorrelation matrix R_XX. The steering vector can be rewritten from its definition in Chapter 2 as

v(k_s) = [e^{−jk_s·d_1}  e^{−jk_s·d_2}  ⋯  e^{−jk_s·d_N}]^T,   (5.2)

which is a nonlinear function of the array element positions. The autocorrelation matrix, defined in Chapter 3, is a function of the inputs X to the antenna array. The signal power σ_S², the noise power, the signal direction, given by k_s, and the powers of the interferers are part of the wireless environment and cannot be changed.
Since an antenna array cannot control the power incident upon it, the autocorrelation matrix can only be altered by changing the array geometry. As a result, the MSE in (5.1) is actually only a complicated function of the geometry of the array. Naturally, the question of determining an optimum geometry for an adaptive array arises, which is the subject of this chapter.

5.2. Interference Environment

Military communication systems will potentially be used in environments with a large amount of cochannel interference from sources intending to impede communication. Hence, an antenna array is suitable for blocking interference that is spatially separated from the desired signal direction. Such arrays are not operating in a unique situation in which the interference is from known directions, so it would not be prudent to optimize the array geometry for a specific interference situation (example: 3 interferers from 3 distinct angles). Instead, the concept of an interference environment will be introduced as a statistical characterization of the expected directions and relative power of the interference. For example, a cell phone tower would expect the interference to be confined to a fixed range of elevation angles (directed towards the ground) and would not be concerned with blocking interference from the sky. Other arrays used in more dynamic environments may expect interference from all directions with equal probability. By optimizing an array geometry with respect to an interference environment, it is possible to minimize the expected (or average) interference power that is not rejected by the array. A specific
example of an interference environment will be detailed in Section 5.4.

Recall the definition of the autocorrelation matrix:

R_XX = E[X(t) X^H(t)],   (5.3)

where the expectation is over time. Each unique interference situation in which the array operates will have a unique autocorrelation matrix associated with it. Going one step further, the expected autocorrelation matrix, S_X, is now defined as

S_X = E_I[R_XX],   (5.4)

where the expectation operator is now over the interference situations (which defines the interference environment). As a simple example, suppose an array was operating in an environment in which interference occurred from one of two distinct angles of arrival with equal probability. Each situation would have an autocorrelation matrix associated with it, written as R_XX1 and R_XX2. The expected autocorrelation matrix is then

S_X = 0.5(R_XX1 + R_XX2).   (5.5)

A noteworthy observation is that if all the antenna elements have the same physical orientation, then S_X can alternatively be found by treating the elements as isotropic sensors while simply adjusting the power levels in the interference environment.

5.3. Optimization for Interference Suppression

The task is now to derive an optimization problem whose solution yields an optimal array geometry. If it is assumed that the interference has a larger power than the signal of interest, or that there are many interferers, then the array's primary goal is to minimize the output power while restricting one of the weights in the array to be unity. This is similar to a sidelobe cancellation system [47] and is also the method used in a 7-element adaptive
array developed by Raytheon for combating interference in GPS systems [75]. Note that power minimization does not attempt to place the maximum of the array factor towards the signal of interest; it is a suboptimal technique in regards to the MSE. However, when the interference power is much stronger than the power of the desired signal, this technique produces weights close to the MMSE weights. The advantage of this technique is its simplicity, as it does not require estimating the direction of arrival of the signal of interest or its power. By using the power minimization technique, the array can greatly reduce the amount of interference power that makes it into the next stage of processing (usually a temporal filter).

The output power from the array at any time is

y*y = w^H X X^H w.   (5.6)

For a fixed interference situation, the average output power is then

P = w^H R_XX w.   (5.7)

A measure of the average output power for a given interference environment, P̄, is then

P̄ = w^H S_X w.   (5.8)

One of the weights is restricted to be unity so that the power minimization algorithm does not set all of the weights to zero. In addition, for practical reasons such as minimizing the effects of mutual coupling, it will be required that the separation between elements be at least λ/4. Let r_ij be the separation between elements i and j. The problem of finding an optimal array for interference suppression can then be written as an optimization problem:

min w^H S_X w
s.t. w^H e_1 = 1
     r_ij ≥ λ/4, for i ≠ j.   (5.9)

The minimization variables are the complex weights and the values of r_ij. Assuming the locations of the antenna elements are known (or held fixed),
the optimal weight vector for the problem given in (5.9) can be found by using Lagrange multipliers. Note that e_1 = [1 0 0 ⋯ 0]^T. The Lagrangian can be expressed as

L(w, Λ) = w^H S_X w + Λ(w^H e_1 − 1).   (5.10)

Taking the gradient of the Lagrangian with respect to w and setting the result to zero yields

∇L = 2 S_X w_opt + Λ e_1 = 0,   (5.11)

where w_opt are the power-minimizing weights. Assuming that S_X is invertible, the weights can be solved from (5.11) as

w_opt = −(Λ/2) S_X^{−1} e_1.   (5.12)

The parameter Λ can be determined by invoking the equality constraint of (5.9):

w_opt^H e_1 = −(Λ/2) e_1^T S_X^{−1} e_1 = 1,   (5.13)

where the property (S_X^{−1})^H = S_X^{−1}, which follows from the definition of an autocorrelation matrix, was used.
w_opt = S_X^{−1} e_1 / ( e_1^T S_X^{−1} e_1 ) .   (5.14)

Substituting (5.14) into the objective function of (5.9), the minimum value of the objective function for a fixed geometry becomes

min_w { w^H S_X w } = 1 / ( e_1^T S_X^{−1} e_1 ) .   (5.15)

Equation (5.15) is only a function of the antenna locations and the interference environment. Since the interference environment cannot be controlled, the goal is now to minimize (5.15) over all array geometries that meet the constraints in (5.9). Equation (5.15) is always nonnegative, because an autocorrelation matrix is always positive semidefinite [76], positive semidefinite matrices always have nonnegative quadratic forms, and if a matrix is positive semidefinite then its inverse is as well [77]. Whenever a function is strictly nonnegative, minimizing it is equivalent to maximizing its reciprocal, so the minimization problem in (5.9) can be rewritten as the maximization problem

max  e_1^T S_X^{−1} e_1 = [S_X^{−1}]_{11}   s.t.  r_ij ≥ λ/4  for i ≠ j ,   (5.16)

where the notation [Z]_{mn} is used to represent the element of the matrix Z from the m-th row and n-th column. The objective function in (5.16) is only a function of the antenna locations and the interference environment. The optimal element locations are those that maximize the objective function of (5.16) subject to the specified constraints. The solution to (5.16) will not be unique, because S_X is invariant to translation (shifting all elements uniformly).
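The closed-form weights (5.14) are easy to exercise numerically. The following is a minimal Python/NumPy sketch (the dissertation's own simulations used MATLAB); the matrix S and the steering vector v are hypothetical stand-ins for S_X and an interferer direction:

```python
import numpy as np

def power_min_weights(S):
    # w_opt = S^{-1} e1 / (e1^T S^{-1} e1), eq. (5.14): minimizes w^H S w
    # subject to the first weight being held at unity.
    N = S.shape[0]
    e1 = np.zeros(N)
    e1[0] = 1.0
    Sinv_e1 = np.linalg.solve(S, e1)     # S^{-1} e1 without forming the inverse
    return Sinv_e1 / (e1 @ Sinv_e1)

# Hypothetical 3-element example: unit noise floor plus one strong interferer.
v = np.array([1.0, 0.5 + 0.2j, -0.3 + 0.8j])    # assumed steering vector
S = np.eye(3) + 10.0 * np.outer(v, v.conj())    # autocorrelation-style matrix
w = power_min_weights(S)

# The constraint w^H e1 = 1 holds by construction, and the achieved power
# equals the minimum value 1/(e1^T S^{-1} e1) of eq. (5.15).
print(np.isclose(w[0], 1.0))
print(np.real(np.vdot(w, S @ w)) <= np.real(S[0, 0]))
```

Both checks print True: the first weight is unity, and the optimized output power never exceeds that of the trivial single-element choice w = e_1.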
The optimization problem in (5.16) is what needs to be solved in order to determine an optimum array geometry for a given interference environment.

5.4. Planar Array with Uniform Interference at Constant Elevation

As an example of the solution to (5.16), a planar array of N elements with the same physical orientation is considered. This example assumes M interferers and takes the interference to be mutually independent and arriving from a uniform distribution in the azimuth direction (φ ∈ [0, 2π], measured counterclockwise from the x-axis), but from a fixed elevation angle (θ ∈ [0, π], measured down from the z-axis towards the plane of the array). The elements are located at positions in the xy-plane given by

r_i = (x_i, y_i, 0) ,  i = 1, 2, …, N .   (5.17)

With M interferers, the input to the array becomes

X(t) = Σ_{n=1}^{M} f(θ, φ) s_n(t) v_n(k_n) ,   (5.18)

where f(θ, φ) is the element pattern for each antenna, while s_n(t) and v_n(k_n) are the signal and the steering vector for the n-th interferer, respectively. For simplicity, it will be assumed that the antenna elements do not have a pattern that varies much in the azimuth direction. Because the interferers are assumed to come from a fixed elevation angle, the element factor can be eliminated (the response of the antenna can be lumped into the received power), so that (5.18) simplifies to

X(t) = Σ_{n=1}^{M} s_n(t) v_n(φ_n) .   (5.19)

Finally, the steering vectors are rewritten as a function of the azimuth angle φ_n only.
The autocorrelation matrix becomes

R_XX = E[ X(t) X^H(t) ] = E[ ( Σ_{n=1}^{M} s_n(t) v_n(φ_n) ) ( Σ_{n=1}^{M} s_n(t) v_n(φ_n) )^H ] .   (5.20)

Assuming the interference to be independent, E[ s_n(t) s_m^*(t) ] = 0 for m ≠ n, and it follows that

E[ ( s_n(t) v_n(φ_n) ) ( s_m(t) v_m(φ_m) )^H ] = 0 ,  m ≠ n .   (5.21)

For m = n, the components of the autocorrelation matrix become

E[ ( s_n(t) v_n(φ_n) ) ( s_n(t) v_n(φ_n) )^H ]_{ab}
  = E[ s_n(t) s_n^*(t) e^{−j(k_x x_a + k_y y_a)} e^{j(k_x x_b + k_y y_b)} ]
  = σ_n^2 e^{−j (2π/λ) sinθ ( cosφ_n (x_a − x_b) + sinφ_n (y_a − y_b) )} .   (5.22)

Equation (5.22) gives the components of the autocorrelation matrix,

[R_XX]_{ab} = Σ_{n=1}^{M} σ_n^2 e^{−j (2π/λ) sinθ ( cosφ_n (x_a − x_b) + sinφ_n (y_a − y_b) )} .   (5.23)

The expected value of (5.23) is now taken, using the fact that the interference is uniformly distributed in the azimuth direction, resulting in
E[R_XX]_{ab} = (1/2π) ∫_0^{2π} Σ_{n=1}^{M} σ_n^2 e^{−j (2π/λ) sinθ ( cosφ_n (x_a − x_b) + sinφ_n (y_a − y_b) )} dφ_n
  = Σ_{n=1}^{M} σ_n^2 (1/2π) ∫_0^{2π} e^{−j (2π/λ) sinθ ( cosφ_n (x_a − x_b) + sinφ_n (y_a − y_b) )} dφ_n .   (5.24)

In order to evaluate the integral in (5.24), the following variable substitutions are made:

x_a − x_b = R_ab cos φ_ab ,   (5.25)
y_a − y_b = R_ab sin φ_ab ,   (5.26)

where R_ab is the distance between elements a and b. Using the trigonometric identity

cos(u − v) = cos(u) cos(v) + sin(u) sin(v) ,   (5.27)

and substituting (5.25) and (5.26) into (5.24), the integral in (5.24) becomes

(1/2π) ∫_0^{2π} e^{−j (2π/λ) R_ab sinθ cos(φ_n − φ_ab)} dφ_n .   (5.28)

Since the integral in (5.28) is over a complete cycle of φ_n, the term φ_ab does not contribute to the integral and can be arbitrarily set to zero without influencing the result. The Bessel function of the first kind of order n can be written in integral form as [78]

J_n(x) = (j^{−n} / 2π) ∫_0^{2π} e^{j (x cosφ + nφ)} dφ .   (5.29)

Hence, using (5.28) and (5.29), it follows that
[S_X]_{ab} = E[R_XX]_{ab} = Σ_{n=1}^{M} σ_n^2 J_0( (2π/λ) R_ab sinθ ) .   (5.30)

Equation (5.30) shows that the expected autocorrelation matrix depends on the total interference power incident on the array, and not on the number of interferers. Equation (5.30), along with (5.16), defines the optimization problem used to determine an array for suppressing interference. A method of determining an optimal array is the subject of the following section.

5.5. Using Simulated Annealing to Find an Optimal Array

The Simulated Annealing (SA) optimization algorithm described in Chapter 4 was found to be suitable for the problem at hand. For simplicity, an elevation angle of θ = 90° is chosen for the interferers. The candidate arrays (or points in the feasible space, as discussed in Section 4.4) at every time step are represented by a real vector in R^{2N}, whose 2N components are the x- and y-positions of the N elements. Ideally, the initial array chosen will have no effect on the optimization result; a circular array is chosen as the initial array.

The perturbation mechanism is implemented by choosing a random vector in R^{2N} whose components are zero-mean with a small variance; these 2N components are added to the x-y coordinates of the current array. If the perturbation moves any two elements too close to each other (< 0.25λ), the perturbation is discarded and a new perturbation selected. The variance is chosen such that the average perturbation for each element is on the order of 0.01λ: a large variance leads to an imprecise search of the solution space, while a small variance leads to a long simulation time.
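The expected matrix (5.30) is straightforward to build numerically. The sketch below (Python/NumPy, not the thesis's MATLAB code) evaluates J_0 directly from its integral form (5.29), which keeps the example self-contained (scipy.special.j0 would serve equally well); the square layout is a hypothetical example:

```python
import numpy as np

def j0(x):
    # J0(x) = (1/2π) ∫ exp(-j x cosφ) dφ; a uniform grid over one full period
    # is very accurate for this periodic integrand.
    phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    return np.real(np.mean(np.exp(-1j * np.asarray(x)[..., None] * np.cos(phi)), axis=-1))

def interference_matrix(xy, total_power, wavelength=1.0, theta=np.pi / 2):
    # [S_X]_ab = (Σ σ_n²) J0((2π/λ) R_ab sinθ), eq. (5.30): only the element
    # spacings R_ab and the total interference power enter.
    R = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return total_power * j0((2.0 * np.pi / wavelength) * R * np.sin(theta))

# Hypothetical 4-element square with quarter-wavelength sides.
xy = 0.25 * np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
S = interference_matrix(xy, total_power=10.0)
print(S.shape)                        # (4, 4)
print(np.allclose(np.diag(S), 10.0))  # J0(0) = 1, so the diagonal is the total power
```

The diagonal check makes the point of (5.30) concrete: each element sees the full interference power regardless of how many interferers contribute to it.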
Another constraint is imposed such that all elements stay within 0.75λ of the origin (the center of the initial circular array) to keep the search space finite. This number was chosen to be large enough that the resulting optimal arrays were not altered by this constraint.

The initial temperature T_0 was chosen such that virtually all (>99%) of perturbations are accepted. The temperature is held constant for a fixed number (P) of perturbations and is then multiplied by a factor u < 1; the solution array is then again perturbed P times. This process is performed until T is small enough that no perturbations that decrease the objective function are accepted (recall the optimization problem is one of maximization); once this happens, the solution has converged upon a local maximum. If P is sufficiently large and the temperature is decreased sufficiently slowly, this method will converge to the global optimum [68]. As u, P, and T_0 are increased, the probability of a correct (globally optimal) solution increases; if they are decreased, the solution is less likely to be optimal. The tradeoff lies in the computational time needed. The method of determining these numbers was to use small values of u, P, and T_0, and increase them until the simulations consistently returned the same solution starting from various initial arrays. In the solution for N=6, the parameters used were u = 0.99, P = 50 000, and T_0 = 12. The simulation for the 6-element array was performed using MATLAB on a computer with a 2.9 GHz processor, and the solution time was approximately 8 hours.

The optimum array configurations for the N = 4, 5, and 6 element arrays, found using the above optimization procedure, are plotted in Figures 16, 17, and 18, respectively. The dotted circles in these figures are of radius 0.25λ.
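The annealing loop described above can be sketched compactly. The following Python version uses a much smaller P and faster cooling than the thesis's values (u = 0.99, P = 50 000) purely so it runs in seconds; the objective is [S_X^{-1}]_{11} from (5.16) with sinθ = 1 and unit total interference power (which scales out):

```python
import numpy as np

rng = np.random.default_rng(0)

def j0(x):
    phi = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    return np.real(np.mean(np.exp(-1j * np.asarray(x)[..., None] * np.cos(phi)), axis=-1))

def objective(xy):
    # Maximize [S_X^{-1}]_{11}, eq. (5.16); a tiny diagonal load keeps S invertible.
    R = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    S = j0(2.0 * np.pi * R) + 1e-6 * np.eye(len(xy))
    return np.linalg.inv(S)[0, 0].real

def feasible(xy):
    # Minimum separation 0.25λ, all elements within 0.75λ of the origin.
    R = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(R, 1.0)
    return R.min() >= 0.25 and np.linalg.norm(xy, axis=1).max() <= 0.75

def anneal(N=4, T0=12.0, u=0.9, P=100, sigma=0.01, T_min=1e-3):
    ang = 2.0 * np.pi * np.arange(N) / N
    xy = 0.5 * np.column_stack([np.cos(ang), np.sin(ang)])  # circular initial array
    f, T = objective(xy), T0
    while T > T_min:
        for _ in range(P):
            cand = xy + rng.normal(0.0, sigma, xy.shape)    # perturb all 2N coords
            if not feasible(cand):
                continue                                    # discard and redraw
            fc = objective(cand)
            if fc > f or rng.random() < np.exp((fc - f) / T):
                xy, f = cand, fc
        T *= u                                              # cooling schedule
    return xy, f

xy, f = anneal()
print(xy.shape, f > 0.0)
```

With the thesis's parameters this same loop simply runs far longer; the structure (perturb, reject infeasible moves, Metropolis acceptance, geometric cooling) is unchanged.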
The results suggest that the interference-suppression capabilities are best for arrays spaced as closely as possible. The optimal arrays all have a center element surrounded by a circular array of radius 0.25λ (the minimum distance allowed). This suggests a tradeoff between interference suppression and the largely spaced arrays used for diversity or to minimize mutual coupling.

Figure 16. Optimum N=4 element array (measured in units of λ).
Figure 17. Optimum N=5 element array (measured in units of λ).
Figure 18. Optimum N=6 element array (measured in units of λ).

5.6. Evaluating the Performance of the Optimal Arrays

In order to illustrate the performance of the optimum array, it will be compared to three other standard arrays: a circular array with radius chosen such that the spacing along the circle between elements is 0.5λ, a linear array with inter-element spacing 0.5λ oriented along the z-axis, and a rectangular array with inter-element spacing 0.5λ, as suggested in [79]. Interferers from six different angles are chosen, each randomly selected from a uniform distribution (on [0°, 360°]) and all at the same elevation angle (90°). The output power w^H R_XX w is calculated when the weights are given by the optimal weight vector for this specific instance, given in (5.31). This is the steady-state solution the adaptive
power-minimization algorithm would converge to in practice, if the first weight is fixed at unity:

w_opt = R_XX^{−1} e_1 / ( e_1^T R_XX^{−1} e_1 ) .   (5.31)

This process is repeated 100,000 times to form an average output power for this type of interference environment. The results are listed in Table I, where the average output power is given relative to the power allowed by the optimal array.

TABLE I
OUTPUT POWER COMPARISON AMONG DIFFERENT ARRAYS

Array         Relative Power (N=4)   Relative Power (N=5)   Relative Power (N=6)
Optimal       0 dB                   0 dB                   0 dB
Circular      11.5 dB                16.1 dB                20.2 dB
Rectangular   24.4 dB                12.2 dB
Linear        37.9 dB                32.5 dB                7.8 dB

Table I illustrates the dramatic effect that array geometry can have on the interference-suppression capabilities of the array. The optimum arrays performed significantly better on average than the standard arrays used in practice, much more than reasonably expected; the output powers for the standard arrays are much higher than those for the optimal arrays, clearly showing the superior interference-suppression capabilities of the optimal geometries.

After viewing Figures 16-18, one may easily conjecture what the optimal 7-element array would be. The optimization procedure is applied and confirms the solution to be that given in Figure 19. The interesting thing about the optimal 7-element array is that it is a hexagonally sampled planar array.
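The evaluation loop behind Table I, random in-plane interferers with per-instance weights from (5.31), can be sketched as follows. This Python version is illustrative only: the trial count, interference power, and the two candidate geometries (a compact center-plus-ring layout like Figures 16-18 versus a 0.5λ rectangular grid) are assumptions, not the thesis's exact setup:

```python
import numpy as np

rng = np.random.default_rng(7)

def steering(xy, az, wavelength=1.0):
    # In-plane (θ = 90°) steering vector for a planar array.
    k = (2.0 * np.pi / wavelength) * np.array([np.cos(az), np.sin(az)])
    return np.exp(-1j * xy @ k)

def avg_output_power(xy, trials=500, M=6, int_power=100.0):
    # Average of w^H R_XX w with w from eq. (5.31) recomputed per realization.
    N = len(xy)
    e1 = np.zeros(N)
    e1[0] = 1.0
    total = 0.0
    for _ in range(trials):
        R = np.eye(N, dtype=complex)                    # unit-power noise floor
        for az in rng.uniform(0.0, 2.0 * np.pi, M):
            v = steering(xy, az)
            R += int_power * np.outer(v, v.conj())
        w = np.linalg.solve(R, e1)
        w /= w[0]                                       # eq. (5.31): first weight unity
        total += np.real(np.vdot(w, R @ w))
    return total / trials

ang = 2.0 * np.pi * np.arange(4) / 4
compact = np.vstack([[0.0, 0.0], 0.25 * np.column_stack([np.cos(ang), np.sin(ang)])])
rect = 0.5 * np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1]], dtype=float)
p_compact, p_rect = avg_output_power(compact), avg_output_power(rect)
print(10.0 * np.log10(p_rect / p_compact))   # positive dB: the compact array suppresses more
```

The intuition matches the chapter's finding: a small aperture makes the in-plane steering vectors highly correlated, so a few degrees of freedom null most of the interference, whereas the widely spaced grid sees near-orthogonal interferers it cannot jointly suppress.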
In multidimensional digital signal processing, it is well known that the optimal sampling strategy to avoid aliasing for circularly band-limited signals is a hexagonally sampled lattice [80]. While sampling for reconstruction and sampling for interference suppression are fundamentally different, the problems are strongly related, and the optimal solutions come out the same (for the case of a circular interference environment). This parallel strengthens the methods and procedures applied in determining optimal arrays.

The array geometry in Figure 19 is the layout of the 7-element GAS-1 (GPS Antenna System) array developed by Raytheon, whose primary function is to suppress interferers or jamming [75]. The elements of the GAS-1 array are circular patches, each operating at the dual frequencies of L1 (1.575 GHz) and L2 (1.227 GHz). Hence, the method of this work confirms that the GAS-1 geometry is optimum for the case of a planar array in a circular interference environment.

Figure 19. Optimum N=7 element array (measured in units of λ).
The method derived above seeks to minimize output power. While this has its advantages, the primary disadvantage is that the desired signal may be muted along with the interferers. To get an idea of the signal-to-interference ratio (SIR) at the output of the array, a few test cases are considered. The expected SIR is given by

SIR = ( Σ_n S_n ) / ( Σ_n I_n ) ,   (5.32)

where S_n and I_n are the output signal power and the output interference power for the n-th situation, respectively. The weight vector used for each case is the vector that minimizes the MSE, given in (3.30).

In Case 1, the desired signal arrives from θ_d = 45° and φ_d = 0°. Twelve interferers are selected from a fixed elevation angle (θ_I = 90°), random azimuth angles, and a 30 dB interference-to-signal ratio (ISR). The process is repeated (with random interference directions selected) 100,000 times to form an expected SIR. Case 2 is the same as Case 1 except that the signal arrives from θ_d = 0°, and Case 3 is the same as Case 1 except that the signal arrives from θ_d = 90°. The resulting SIRs for N = 5, 6, and 7 elements are determined for the optimal array along with the circular, linear, and rectangular arrays used previously. The results for Cases 1, 2, and 3 are presented in Tables II, III, and IV, respectively.
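The per-realization bookkeeping behind (5.32) is simple to make explicit. A minimal sketch (hypothetical steering vectors and powers, chosen so the result is easy to verify by hand):

```python
import numpy as np

def output_powers(w, v_sig, v_ints, sig_power=1.0, int_power=1000.0):
    # One realization's output signal power S_n and interference power I_n.
    # Eq. (5.32) averages the S_n and I_n separately before taking the ratio,
    # rather than averaging per-realization SIRs.
    S = sig_power * abs(np.vdot(w, v_sig)) ** 2
    I = sum(int_power * abs(np.vdot(w, v)) ** 2 for v in v_ints)
    return S, I

# Toy check: a weight vector orthogonal to the single interferer's steering
# vector passes the signal and rejects the interference completely.
v_sig = np.array([1.0, 1.0])
v_int = np.array([1.0, -1.0])
w = np.array([0.5, 0.5])                 # w^H v_int = 0
S, I = output_powers(w, v_sig, [v_int])
print(S, I)   # 1.0 0.0
```

Accumulating S and I over many random interference draws and forming ΣS/ΣI reproduces the expected-SIR figure of merit used in Tables II-IV.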
TABLE II
RELATIVE SIR FOR CASE 1

Array         Relative SIR (N=5)   Relative SIR (N=6)   Relative SIR (N=7)
Optimal       0 dB                 0 dB                 0 dB
Circular      6.13 dB              2.64 dB              4.45 dB
Rectangular   19.15 dB             4.59 dB              6.9 dB
Linear        2.24 dB              12.98 dB             25.41 dB

TABLE III
RELATIVE SIR FOR CASE 2

Array         Relative SIR (N=5)   Relative SIR (N=6)   Relative SIR (N=7)
Optimal       0 dB                 0 dB                 0 dB
Circular      3.45 dB              10.15 dB             26.8 dB
Rectangular   7.85 dB              19.45 dB             15.9 dB
Linear        5.1 dB               6.29 dB              5.82 dB

TABLE IV
RELATIVE SIR FOR CASE 3

Array         Relative SIR (N=5)   Relative SIR (N=6)   Relative SIR (N=7)
Optimal       0 dB                 0 dB                 0 dB
Circular      11.97 dB             11.5 dB              6.4 dB
Rectangular   4.97 dB              10.1 dB              15.3 dB
Linear        18.07 dB             27.6 dB              35.3 dB

The results given in Tables II-IV show that, on average, the optimal array boosts the SIR compared to the other arrays; in some situations, the optimal array produces significant SIR gains compared to the standard arrays. The linear array has some advantage in blocking interference in that it has a smaller field of view than the other two-dimensional arrays. This is exhibited by the slightly superior results seen for Cases 1 and 2 when N=5. However, when the signal is in the same plane as the interferers
(Case 3), the linear array performs poorly. In addition, the results implicitly show that the array geometry used has a significant effect on the array's performance.

5.7. Summary

To briefly summarize the chapter: an optimization problem (5.16) has been derived whose solution yields an optimal array for suppressing interference, and a method of solution was demonstrated using the Simulated Annealing optimization algorithm. Optimizing an adaptive antenna array's geometry can be done by defining an interference environment, or expected directions and level of interference. In this manner, the array is not optimized for a specific situation, but rather optimized to maximize the performance on average, based on the expected environment the array is to operate in. A specific problem of a circular interference environment was studied. While the optimization problem was set up to minimize output power and thereby reduce interference, it does indeed raise the output SIR as desired.
VI. MINIMUM SIDELOBE LEVELS FOR LINEAR ARRAYS

6.1. Introduction

One-dimensional arrays have been extensively analyzed, dating back to the early part of the 20th century. Their ubiquity in textbooks and actual applications is partly due to the relative ease with which they are analyzed. Methods of weight selection were discussed in Chapter 3; the most important for the discussion of this chapter is the Dolph-Chebyshev weighting method, which can determine minimum-sidelobe weights for uniformly spaced linear arrays of omnidirectional antennas. Optimizing geometry for sidelobe minimization has also been examined via a range of techniques, as discussed in Chapter 1. Recently, the authors of [17] used the Particle Swarm Optimization (PSO) method to determine optimum sidelobe-minimizing positions for linear arrays assuming the weights were constant. However, the question of the minimum possible sidelobe level for an N-element linear array has yet to be answered. Determining this for a linear array of arbitrary elements, steered to an arbitrary angle, is the goal of this chapter.

In this chapter, the weights and positions of a linear array will be optimized to lower sidelobes. In [13] and [17], the authors force the arrays to have symmetry about the center to keep the array factor real; the work in this chapter does not require this restriction. In addition, the arrays will have no bounds on minimum or maximum element separation, except in Section 6.5, where a minimum element separation is needed.

For a given linear array, a method of finding the optimum weights for minimizing the sidelobe level is derived given:
• a beamwidth
• the array elements' positions
• the individual antenna's radiation pattern
• a desired direction ( θ_d ) for the array to be scanned.

This problem will be posed in convex form; thus it can be solved without searching through the space of weights as in [28]. The element positions will be unrestricted, and the space will be extensively searched via PSO in order to find optimum positions in conjunction with the corresponding optimum weights. The positions found via PSO are likely to be globally optimal, as discussed in Section 6.4. Consequently, the results presented here likely represent global bounds on the minimum-possible sidelobe levels achievable for a given beamwidth. This information can be used by array designers to determine how well their arrays perform compared to the best design possible, i.e., whether altering the weights or element positions could potentially return a significant improvement in performance.

6.2. Problem Setup

The basic geometry of a one-dimensional linear array is shown in Figure 20. The positions of a one-dimensional N-element linear array can be written as a vector d = (d_1, d_2, …, d_N), where d_n is the position of the n-th element measured from the origin along the z-axis. Incoming plane waves are characterized by an angle θ (measured from the z-axis) that specifies their direction of arrival.
Figure 20. Basic setup of a linear N-element array.

Assuming a vector w = (w_1, w_2, …, w_N) of complex excitation weights, the array factor (AF) can be rewritten from (2.20) as

AF(w, d, θ) = Σ_{n=1}^{N} w_n e^{j k d_n cosθ} ,   (6.1)

where k = 2π/λ. For a given angle θ, the array factor is a function of the weight vector and the positions of the elements. It will be assumed that the elements are identical and oriented in the same direction; the problem can be extended to arrays with distinct antenna elements in a straightforward manner. The total radiation pattern T(w, d, θ) is then the product of the array factor and the element pattern,

T(w, d, θ) = f(θ) AF(w, d, θ) ,   (6.2)

where f(θ) is each element's radiation pattern.
This chapter addresses determining the minimum possible sidelobe level of an N-element array for a given beamwidth, which consists of finding the optimum combination of weights and element positions that minimize the sidelobe level. The sidelobe level is defined as the maximum value of the total radiation pattern outside of the main beam; the beamwidth is then the angular range in which the radiation pattern is not to be minimized. Letting Θ represent the angles in which the radiation pattern is to be suppressed, the sidelobe level (SLL) can be written mathematically as

SLL = max_{θ ∈ Θ} | T(w, d, θ) | .   (6.3)

The normalized radiation pattern is constrained to be unity towards the desired direction ( θ_d ). Since the optimum weights can be determined for every linear array, the problem reduces to finding the optimum positions that (along with the corresponding optimum weights) yield a sidelobe level that no other combination of weights or element positions can improve upon.

6.3. Determination of Optimum Weights for an Arbitrary Linear Array

This section determines optimum weights for a linear array with arbitrary element positions. In [27], an optimization procedure using linear programming (LP) was developed for sidelobe-level-minimizing weights; however, those results only apply for symmetric arrays and real-valued weights. For non-uniformly spaced linear arrays, a new method must therefore be developed. Let the positions of an arbitrarily spaced N-element linear array be described by the vector d. The optimum sidelobe-minimizing weights are then the solution to the optimization problem given in (6.4).
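The quantities (6.1)-(6.3) can be evaluated directly. A minimal Python/NumPy sketch (the chapter's own computations used MATLAB), with an illustrative uniform array and f(θ) = 1:

```python
import numpy as np

def array_factor(w, d, theta, wavelength=1.0):
    # AF(w, d, θ) = Σ_n w_n exp(j k d_n cosθ), eq. (6.1), with k = 2π/λ.
    k = 2.0 * np.pi / wavelength
    return np.exp(1j * k * np.outer(np.cos(np.atleast_1d(theta)), d)) @ w

def sidelobe_level(w, d, theta_suppress):
    # SLL = max over the suppressed region Θ of |T|, eq. (6.3), for f(θ) = 1.
    return np.max(np.abs(array_factor(w, d, theta_suppress)))

# Uniform half-wavelength 4-element array with uniform weights (illustrative).
d = 0.5 * np.arange(4)
w = np.ones(4) / 4.0
theta = np.radians(np.concatenate([np.arange(0, 61), np.arange(120, 181)]))
print(np.isclose(np.abs(array_factor(w, d, np.pi / 2))[0], 1.0))  # unity mainbeam
print(sidelobe_level(w, d, theta) < 1.0)                          # sidelobes below it
```

Both checks print True: the broadside response equals the weight sum (here normalized to one), and everything outside the 60°-120° main-beam region stays below it.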
8). To accomplish this. . This problem can be put into a fairly simple convex optimization form. Minimizing the magnitude of the total radiation pattern at a fixed position θ i . given in (6.4) In (6. and the complex weight vector w is the variable in the problem. d. (6. w1IM . w N . ti and si ). C N is the set of all Nelement vectors with complex components. n =1 N (6.θ M . K .4). d. RE IM can be written as an optimization problem (with variables w1RE . which is rapidly solvable and solutions are guaranteed to be globally optimum. θ )} = f (θ ) ∑ {wn cos(kd n cos θ ) + wn sin (kd n cos θ )} . d. T (w. d. n =1 N (6. w N . RE IM wn = wn + jw n .5) The real part of the total radiation pattern can then be written as RE IM Re{T ( w. θ 2 . θ )} = f (θ )∑ {wn cos (kd n cos θ ) − wn sin (kd n cos θ )} .6) and the imaginary part as IM RE Im{T ( w. while the beam is maximum in direction θ d .θ )  w ∈ C N θ ∈ Θ s.K.θ d ) = 1 (6.82 min max  T (w. the weights are first expressed in terms of their real and imaginary parts.t.7) Next. Θ is partitioned into M discrete sample points θ1 . Selection of the sample points is discussed at the end of the section.
min  t_i^2 + s_i^2
s.t.  −t_i ≤ Re{ T(w, d, θ_i) } ≤ t_i
      −s_i ≤ Im{ T(w, d, θ_i) } ≤ s_i
      Re{ T(w, d, θ_d) } = 1
      Im{ T(w, d, θ_d) } = 0   (6.8)

In the above optimization problem, t_i and s_i are dummy variables. Each constraint in (6.8) on the real part of the total radiation pattern can be written as a linear inequality. For notational simplicity, let p_n^i = k d_n cosθ_i; then, for example,

f(θ_i) [ cos p_1^i  −sin p_1^i  ⋯  cos p_N^i  −sin p_N^i  −1  0 ] [ w_1^{RE}  w_1^{IM}  ⋯  w_N^{RE}  w_N^{IM}  t_i  s_i ]^T ≤ 0 .   (6.9)

Similarly, the constraints on the imaginary part of the total radiation pattern can also be expressed in this form. Hence, the optimization problem in (6.8) can be rewritten in the simpler form given in (6.10):

min  t_i^2 + s_i^2   s.t.  A_i Z ≤ 0 ,  B Z = 0 .   (6.10)
In (6.10), A_i is the matrix that describes the inequality constraints, B is a matrix that describes the equality constraints, Z is a vector of the problem variables ( w_1^{RE}, w_1^{IM}, …, w_N^{RE}, w_N^{IM}, t_i, s_i ), and 0 is a vector of zeros. The optimization problem of (6.10) is a simple quadratic program, which is easily solved numerically [81].

This procedure can be accomplished at every location θ_i for which the total radiation pattern is to be suppressed. Extend the vector Z to include the weights and all the dummy variables:

Z^T = [ w_1^{RE} w_1^{IM} ⋯ w_N^{RE} w_N^{IM}  t_1 t_2 ⋯ t_M  s_1 s_2 ⋯ s_M ] .   (6.11)

Adding the constraints for all M positions to the matrix A, the problem in (6.10) can be extended into the form given in (6.12):

min  max_{i = 1, …, M}  t_i^2 + s_i^2   s.t.  A Z ≤ 0 ,  B Z = 0 .   (6.12)

This problem will minimize the array factor at all desired locations. Since t_i^2 + s_i^2 is a convex function for all i, the pointwise maximum in (6.12) is also a convex function (as derived in Chapter 4). Finally, (6.12) does not guarantee that the magnitude of the total array radiation pattern is a maximum at θ_d (the maximum could be anywhere outside the region Θ). To have a maximum at θ_d, a necessary condition is for the derivative of the squared magnitude of the radiation pattern to be zero there. Writing T(θ) = F(θ) + jI(θ) in terms of its real and imaginary parts, it follows that
d(T T^*)/dθ = T (dT^*/dθ) + T^* (dT/dθ) = (F + jI)(F′ − jI′) + (F − jI)(F′ + jI′) = 2(F F′ + I I′) .   (6.13)

At θ_d, the equality constraints force F = 1 and I = 0, which implies that the squared magnitude of the total radiation pattern has zero derivative if

F′ = d Re{ T(w, d, θ_d) } / dθ = 0 .   (6.14)

This constraint is also a linear constraint on w, and it can be added to the matrix B. When this is implemented, the method is sufficient to place the maximum of the radiation pattern in the direction θ_d (at least for all cases considered in this chapter).

In summary, the result is that (6.12) represents a convex optimization problem and is therefore solvable, with solutions that represent global optima [66]. The problem can be solved via a standard numerical optimization routine, or via commercial software such as MATLAB (for example, using the function fminimax). Once the weights are determined, the total radiation pattern can be plotted to show that the sidelobes are indeed suppressed as desired, even in between the sampled points.

The only question left is the selection of the number of sample points, M. In theory, it is desirable to choose sample points as closely spaced as possible to guarantee the radiation pattern is suppressed. The only two values of θ that need to be selected exactly are those that define the boundary of the main beam; the remaining values can be selected fairly sparsely. Usually, a spacing of 5° between sample points is sufficient.
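A reduced instance of this minimax formulation can be solved with an off-the-shelf solver. The sketch below restricts itself to a symmetric array with real paired weights, which keeps the array factor real and turns the problem into a linear program in the style of [27] (the general complex case requires the quadratic program above); the positions, beamwidth, and sampling are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Symmetric 4-element broadside array: element pairs at ±d (in wavelengths).
d = np.array([0.25, 0.75])
k = 2.0 * np.pi
u = np.linspace(0.5, 1.0, 60)            # cosθ samples over the sidelobe region (BW = 60°)
C = 2.0 * np.cos(k * np.outer(u, d))     # AF(θ_i) = C @ w is real by symmetry

# Variables z = [w1, w2, t]: minimize t  s.t.  ±AF(θ_i) ≤ t  and  AF(90°) = 1.
A_ub = np.vstack([np.hstack([C, -np.ones((len(u), 1))]),
                  np.hstack([-C, -np.ones((len(u), 1))])])
b_ub = np.zeros(2 * len(u))
A_eq = np.array([[2.0, 2.0, 0.0]])       # at broadside cosθ = 0, so AF = 2(w1 + w2)
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None), (None, None), (0.0, None)], method="highs")
w_opt, sll = res.x[:2], res.x[2]
print(res.success, sll < 1.0)            # sidelobes pushed well below the unity mainbeam
```

For a broadside symmetric array the derivative constraint (6.14) is satisfied automatically, which is why it does not appear here; scanning away from broadside would require adding it as an extra equality row.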
In practice, the method works with sparse spacing, and it is advantageous to choose a sparse sampling to speed up the computation time, as will be seen in Section 6.4.

The method developed in this section returns weights almost identical to those of the Dolph-Chebyshev method for uniformly spaced linear arrays of omnidirectional antenna elements. The discrepancy results from the Dolph-Chebyshev method using the null-to-null beamwidth and suppressing sidelobes outside of that region, whereas this method suppresses sidelobes outside a specified beamwidth, which isn't necessarily null to null. However, the difference is extremely small, and the weights are the same to at least three significant digits for the arrays in question.

6.4. Broadside Linear Array

In this section, broadside ( θ_d = 90° ) N-element linear arrays of omnidirectional [ f(θ) = 1 ] antennas are considered. Suppose the desired beamwidth is 2Δ°; then the region Θ in which the radiation pattern is to be minimized can be written as

Θ = { θ : 0° ≤ θ ≤ θ_d − Δ  or  θ_d + Δ ≤ θ ≤ 180° } .   (6.15)

The sidelobe level (SLL) for a given weight vector w can be expressed in dB as

SLL(w) = max_{θ ∈ Θ} 20 log10 | T(w, d, θ) | .   (6.16)

Two cases will be analyzed in this chapter: Case 1 will have a relatively large beamwidth, 2Δ° = 60°, and Case 2 a relatively small beamwidth, 2Δ° = 30°. The goal is to determine the optimum element positions for minimizing sidelobes, and consequently, the global bound on sidelobe level for linear arrays. A minimum inter-element separation of 0.25λ can be enforced; however, the results are the same
whether or not this constraint is applied, since the elements tend to separate largely from each other to achieve low sidelobes.

The arrays are described by a vector of element positions. Since for any array configuration d the optimum weights can be found, the problem now becomes one of finding the best antenna element positions to minimize the SLL. For an array with two elements (N=2), the problem is a function of a single variable (the element separation d_2 − d_1), and hence a global optimum can easily be found. For linear arrays with more than two elements, a search method needs to be employed that is rapid (without excessive computation time) and accurate (solutions are consistent and no other method leads to a better solution). The Particle Swarm Optimization (PSO) method was found to be suitable for this task. While no method short of an exhaustive search can guarantee a global optimum for problems such as these (many local minima in the objective function), intelligent use of PSO gives high confidence that the solutions are indeed globally optimum. For a detailed discussion of PSO that goes beyond the elementary discussion in Chapter 4, the reader is referred to [17, 82].

The PSO is used by initially choosing a set of P random arrays (these are the particles used in PSO). For each position vector, the optimum weights are calculated and the sidelobe level is determined. The element positions are updated via the PSO technique, and the process is repeated. For the results presented in this work, the algorithm is run with the PSO parameters set to w_V = 0.5, c_1 = 2.0, and c_2 = 2.0. Each run of the algorithm is executed for approximately 20-200 iterations, or long enough that several successive iterations no longer decrease the objective function.
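The PSO update loop with the parameters quoted above can be sketched as follows. This is a generic minimal implementation, exercised on a toy quadratic cost as a stand-in for the (much more expensive) sidelobe-level objective; the swarm size, iteration count, and search span are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_minimize(f, dim, n_particles=30, iters=200, wV=0.5, c1=2.0, c2=2.0, span=4.0):
    # Minimal particle swarm with inertia wV and cognitive/social weights c1, c2.
    x = rng.uniform(0.0, span, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = wV * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([f(p) for p in x])
        better = cost < pcost                        # update personal bests
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, pcost.min()

# Toy objective standing in for SLL(d): minimized at p = (1.5, 1.5, 1.5).
g, c = pso_minimize(lambda p: np.sum((p - 1.5) ** 2), dim=3)
print(c < 0.05)
```

In the chapter's actual use, f(p) would interpret p as the element positions and return the optimum SLL from the convex weight problem of Section 6.3, which is what makes each cost evaluation expensive and the particle counts of Table V necessary.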
Because the arrays (or particles) start from random positions every time, this method is fairly certain to return the globally optimum array if the number of particles is large and repeated application of the algorithm returns identical arrays. If repeated use of the algorithm does not produce identical results, P is increased until consistency is achieved. The number of particles P required for regular convergence is given in Table V, which also gives the approximate time per simulation on a single 3.0 GHz processor running MATLAB (a speedup by approximately a factor of S can be obtained if the code is written for S processors in parallel). The number of particles required appears to grow roughly as N!, which indicates PSO cannot consistently return optimum solutions for large arrays.

TABLE V
NUMBER OF PARTICLES REQUIRED FOR CONVERGENCE FOR VARYING ARRAY SIZE, WITH SIMULATION TIME

Array Size (N)   P      Time (Hours)
2                2      0.05
3                4      0.2
4                16     1.3
5                100    7
6                700    50
7                3000   300

Results for broadside arrays of size N = 2-7 elements are presented in Tables VI-IX (note that d_1 = 0 for all arrays in this chapter). The weights are allowed to be complex, but are found to be real-valued. For both cases, the optimum arrays are very regular, with either uniform or nearly uniform spacing. The weights given in Table VII are identical to the Dolph-Chebyshev weights (to at least 3 significant digits). Case 2 has
a larger array length, consistent with results developed in [83], in which the beamwidth is reported to decrease with increased array length. Because the results for Case 1 have uniform spacing, the method in [15] correctly determines the optimum sidelobe level for this problem; however, this will not hold in future scenarios. The magnitude of the array factor for both cases is shown in Figure 21 for N=6, and the results for N=7 are shown in Figure 22.

TABLE VI
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θ_d = 90°)

N   d2      d3      d4      d5      d6      d7      SLL (dB)
2   0.667                                           -6.02
3   0.667   1.333                                   -16.90
4   0.667   1.333   2.000                           -27.05
5   0.667   1.333   2.000   2.667                   -39.45
6   0.667   1.333   2.000   2.667   3.333           -51.21
7   0.667   1.333   2.000   2.667   3.333   4.000   -62.62

TABLE VII
OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θ_d = 90°)

N   w1      w2      w3      w4      w5      w6      w7
2   0.500   0.500
3   0.286   0.429   0.286
4   0.154   0.346   0.346   0.154
5   0.107   0.247   0.290   0.247   0.107
6   0.044   0.166   0.290   0.290   0.166   0.044
7   0.024   0.083   0.227   0.333   0.227   0.083   0.024
500 0.060 .178 3.794 0.286 0.346 0.383 2.059 0.59 12.178 3.589 1.083 0.794 0.90 TABLE VIII OPTIMUM ELEMENT POSITIONS (IN λ ) FOR CASE 2 (BW= 30°. N 2 3 4 5 6 7 d2 d3 1.429 0.247 0.290 0.127 0.97 4.589 1.383 2.044 0.130 0.762 SLL (dB) 1.794 0.24 18.227 0.383 3.589 1.198 w4 w5 w6 w7 2 3 4 5 6 7 0.15 30.166 0.290 0.95 6.154 0.500 0.794 0.589 d5 d6 d7 0.247 0.97 3.589 1.154 0.794 0.794 2.286 0.18 24.199 0.178 3.340 0.044 0.346 0. θ d = 90° ) Error! Objects cannot be created from editing field codes.07 TABLE IX OPTIMUM WEIGHTS FOR CASE 2 (BW= 30°. θ d = 90° ) N w1 w2 w3 0.083 0.166 0.383 2.
91 (a) ( BW = 60°. θ d = 90°) (b) ( BW = 30°. θ d = 90°) Figure 21. θ d = 90°) Figure 22. Magnitude of array factor for optimal arrays (N=7). (a) ( BW = 60°. Magnitude of array factor for optimal arrays (N=6). . θ d = 90°) (b) ( BW = 30°.
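The convergence test described above — rerun the optimizer from independent random starts and increase the particle count P until successive runs return the same answer — can be sketched as follows. This is a schematic stand-in only: the objective function and the random-search inner loop are placeholders for the actual sidelobe objective and PSO update used in this chapter.

```python
import random

def toy_objective(x):
    # Placeholder for the sidelobe-level objective; its minimum is at x = 0.
    return x * x

def run_optimizer(P, seed):
    # Crude random-search stand-in for one PSO run with P particles.
    rng = random.Random(seed)
    return min((rng.uniform(-10.0, 10.0) for _ in range(P)), key=toy_objective)

def optimize_until_consistent(tol=0.05, P0=2, max_P=1 << 17):
    """Increase P until two independent runs agree to within `tol`,
    mirroring the consistency test used to gain confidence that the
    returned solution is globally optimal."""
    P = P0
    while P <= max_P:
        a = run_optimizer(P, seed=P)
        b = run_optimizer(P, seed=P + 1)
        if abs(a - b) < tol:
            return P, a          # consistent: accept the solution
        P *= 2                   # inconsistent: retry with more particles
    raise RuntimeError("no consistent solution within the particle budget")
```

As in the text, agreement across independent restarts is evidence of global optimality rather than proof, and the required P grows quickly with problem size.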
6.5. Array Scanned to 45 Degrees

Suppose now the goal is to find the optimum array pattern for an arbitrarily spaced N-element linear array scanned to 45° from broadside (θd = 45°). This problem is solved in an identical manner to that in Section 6.4. However, for this problem, a minimum separation of 0.25λ between elements was enforced, as in [30]. The optimum element positions and the weight vectors are given in Tables X-XIII. The weights found in this section are complex; for clarity, X∠Y is equal to X cos(Y) + jX sin(Y). The positions found are very irregular, and the elements favor having at least one element separation at the minimum allowable value (0.25λ) for many of the cases. Comparing these results to Section 6.4, it is clear that it is more difficult for a linear array to have low sidelobes when it is scanned away from broadside. Consequently, the arrays tend to favor closely spaced elements. The magnitude of the array factor for both cases is shown in Figure 23 for N=6, and Figure 24 plots the results for N=7.

TABLE X
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 (BW = 60°, θd = 45°)

TABLE XI
OPTIMUM WEIGHTS FOR CASE 1 (BW = 60°, θd = 45°)
(complex weights, given in magnitude∠phase form)

TABLE XII
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 2 (BW = 30°, θd = 45°)

TABLE XIII
OPTIMUM WEIGHTS FOR CASE 2 (BW = 30°, θd = 45°)
(complex weights, given in magnitude∠phase form)

Figure 23. Magnitude of array factor for optimal arrays (N=6): (a) BW = 60°, θd = 45°; (b) BW = 30°, θd = 45°.

Figure 24. Magnitude of array factor for optimal arrays (N=7): (a) BW = 60°, θd = 45°; (b) BW = 30°, θd = 45°.

6.6. Array of Dipoles Scanned to Broadside
The broadside case is considered again, this time assuming the antennas are short or ideal dipoles having a normalized radiation pattern

f(θ) = sin θ.    (6.17)

The problem is solved again as in Section 6.4. For this case, no minimum separation is required, since the elements tend to spread out as in Section 6.4. The resulting optimum element positions and weights are given in Tables XIV-XVII. The magnitude of the total radiation pattern for both cases is shown in Figure 25 for N=6, and Figure 26 gives the results for N=7.

TABLE XIV
OPTIMUM ELEMENT POSITIONS (IN λ) FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)

TABLE XV
OPTIMUM WEIGHTS FOR CASE 1 WITH DIPOLES (BW = 60°, θd = 90°)

TABLE XVI
OPTIMUM ELEMENT POSITIONS (IN λ) AND SLL FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)

TABLE XVII
OPTIMUM WEIGHTS FOR CASE 2 WITH DIPOLES (BW = 30°, θd = 90°)

Figure 25. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=6): (a) BW = 60°, θd = 90°; (b) BW = 30°, θd = 90°.

Figure 26. Magnitude of the total radiation pattern for optimal arrays of dipoles (N=7): (a) BW = 60°, θd = 90°; (b) BW = 30°, θd = 90°.
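The total pattern evaluated in the figures above is the product of the dipole element pattern (6.17) and the array factor. A minimal sketch of that product for a linear broadside array; the element positions and weights below are illustrative values, not the optimum arrays of Tables XIV-XVII:

```python
import numpy as np

def dipole_array_pattern(theta, positions, weights, lam=1.0):
    """Total broadside radiation pattern of a linear array of short
    dipoles: the element pattern f(theta) = sin(theta), eq. (6.17),
    multiplies the array factor. Positions are in wavelengths along
    the array axis."""
    k = 2 * np.pi / lam
    kz = k * np.cos(theta)   # wavenumber component along the array axis
    af = sum(w * np.exp(1j * kz * d) for w, d in zip(weights, positions))
    return np.abs(np.sin(theta) * af)

# Illustrative 3-element array with uniform weights.
pos = [0.0, 0.7, 1.4]
wts = [1 / 3, 1 / 3, 1 / 3]
print(dipole_array_pattern(np.pi / 2, pos, wts))  # broadside: sin(90 deg) * |AF|
```

At broadside (θ = 90°) the element factor is unity and the weights add in phase, so the normalized total pattern peaks there, as in Figures 25 and 26.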
The results of this section, compared with the results of Section 6.4, show that the optimum linear array element positions (and associated weights) will be different depending on the type of antenna elements used in the array. The individual dipole's radiation pattern works to lower the total radiation pattern away from broadside; consequently, this helps to lower the overall sidelobe level, so that an array of dipoles has lower sidelobes than an array of omnidirectional radiators. The elements in this section also have a larger spacing than the broadside array considered in Section 6.4, which is evident by comparing the results of this section with those of Section 6.4.

6.7. Mutual Coupling

Mutual coupling is present in all antenna arrays to some degree. This coupling affects the radiation pattern of the elements, which can degrade the overall radiation pattern [84]. In this section, the extent to which the above results vary due to mutual coupling is considered. Without mutual coupling, the output of the array will be written as X_ideal; when mutual coupling is present, the output of the array will be written as X_actual. Because of the linearity of Maxwell's equations, it is reasonable to model the coupling as a linear system. Hence the relationship between the ideal and actual array outputs can be written as

X_actual = C X_ideal,    (6.18)

where C is a square matrix known as the mutual coupling matrix. This matrix can be modeled as [85]

C = Z_L (Z + Z_L I)^(-1),    (6.19)

where Z_L is the load impedance in each element. In (6.19), Z is the impedance matrix, which relates the current into each antenna to the corresponding voltage:

V = Z I.    (6.20)
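Equation (6.19) can be evaluated directly once the impedance matrix Z and the load impedance Z_L are known. The sketch below does so with numpy for a 2x2 case; the impedance entries are placeholder values chosen for illustration, not measured data for any particular element.

```python
import numpy as np

# Illustrative impedance matrix Z (ohms): self impedances on the diagonal,
# mutual impedances off the diagonal. Placeholder numbers, not measured data.
Z = np.array([[73.0 + 42.5j, 40.0 - 28.0j],
              [40.0 - 28.0j, 73.0 + 42.5j]])
Z_L = 50.0  # load impedance at each element (ohms)

# Mutual coupling matrix, eq. (6.19): C = Z_L (Z + Z_L I)^(-1)
C = Z_L * np.linalg.inv(Z + Z_L * np.eye(Z.shape[0]))

# Eq. (6.18): the coupled array snapshot is C times the ideal one.
x_ideal = np.array([1.0 + 0.0j, 0.5 - 0.2j])
x_actual = C @ x_ideal
```

Because C is a fixed linear map, the ideal snapshot can be recovered exactly by applying C^(-1), which is the basis of the compensation discussed next.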
If the mutual coupling matrix is known, then the array input can be premultiplied by the inverse of the coupling matrix to recover the decoupled array output, as in (6.21):

X = C^(-1) X_actual.    (6.21)

Since the output of the array is given by

y = w^H X = w^H C^(-1) X_actual,    (6.22)

the optimal weights derived in Section 6.3 can be replaced by

w'_opt = C^(-1) w_opt.    (6.23)
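Equations (6.21)-(6.22) state that premultiplying the measured snapshot by C^(-1) restores the output the weights were designed to produce. A minimal numerical check of that identity, with placeholder values for the coupling matrix, weights, and snapshot:

```python
import numpy as np

# Placeholder coupling matrix and (real) optimal weights; illustrative only.
C = np.array([[0.9 + 0.10j, 0.2 - 0.05j],
              [0.2 - 0.05j, 0.9 + 0.10j]])
w_opt = np.array([0.5, 0.5])

x_ideal = np.array([0.8 + 0.3j, -0.1 + 0.6j])   # hypothetical ideal snapshot
x_actual = C @ x_ideal                          # coupled snapshot, eq. (6.18)

# Eq. (6.22): y = w^H C^(-1) x_actual equals the coupling-free output w^H x_ideal.
y_compensated = w_opt.conj() @ (np.linalg.inv(C) @ x_actual)
y_ideal = w_opt.conj() @ x_ideal
```

The two outputs agree to machine precision, so the radiation pattern is preserved exactly whenever the linear model (6.18) holds.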
The resulting total radiation patterns will then be the same as presented here, to the extent that the mutual coupling matrix model is correct. Experimental results by Huang et al. [85] suggest that the model performs fairly well. A circular array of dipoles was considered in that work, which will necessarily have a strong degree of mutual coupling because each element is in line with the other elements' directions of maximum radiation. Using (6.23), the compensated radiation patterns for the arrays were compared to the ideal or non-coupled case, and the results were found to be in agreement. Hence, it is expected that arrays without such a strong degree of coupling (as commonly used in practice) can also be accurately modeled using (6.18) and (6.19).

6.8. Conclusions

This chapter determined the limits of performance on linear arrays of size N = 2-7. A method of determining globally optimum weights for minimizing sidelobes for a given linear array was presented. The elements were then varied in position until it was certain that a global optimum was found. Consequently, it is very likely that no other weighting strategy or element placement scheme will lead to sidelobes lower than those presented here. These results can be used as a benchmark in comparing existing array performance, to determine if it is worth updating the array placement or weighting strategy.
VII. MINIMIZING SIDELOBES IN PLANAR ARRAYS

7.1. Introduction

The natural next step in studying sidelobe minimization in antenna arrays is to look at two-dimensional, or planar, arrays. In this chapter, many of the ideas from Chapter 6 are extended to two-dimensional arrays, which are mathematically similar but whose total radiation pattern is more complex.

Sidelobe minimization has received renewed interest due to the difficult nature of the wireless channel. To block interference, it is best to place nulls in the direction of the interference. However, this often does not work well in practice. For example, the European standard for the 3rd generation of mobile communication is known as Wideband Code Division Multiple Access (WCDMA). In this scheme, the same frequency spectrum is shared simultaneously by all users; for an in-depth description, see [86]. Consequently, interference is a major problem. These systems are designed to work with a large number of users, and since an N-element antenna array can only place N-1 nulls, an impractically large number of antennas would be needed to null out signals from all directions not of interest. In addition, due to multipath effects, each signal will arrive from several distinct angles, which further reduces the performance of a nulling-based approach. In [87], the performance of arrays with different weighting methods used in WCDMA systems is compared. It was found that for a large number of interferers, a low-sidelobe method will outperform a nulling method. The low-sidelobe method is also preferred because no processing needs to be performed to determine the directions of arrival of the various signals. Hence, as the capacity requirements of wireless communication systems increase, methods for reducing sidelobe levels will become increasingly important.

Previous work has been performed in an attempt to develop wideband sidelobe-minimizing weights. In [88], the author develops a wideband weighting method that works for 2D rectangular arrays. This method does incorporate the antenna patterns into determining the weight vector, which increases the utility of the method. However, the results of that work assume all signals arrive from a fixed elevation angle, which is a major restriction; in addition, the results are clearly suboptimal in viewing the resulting sidelobes. Other wideband weighting methods use antenna coefficients that vary with frequency in order to improve the radiation pattern over a range of frequencies. This is done using a tapped delay-line filter in [89], and with a recursive filter in [90]. In this chapter, the weights will continue to be constant (not a function of frequency), so that the weights derived are easily implemented in a real system. In addition, the WCDMA systems should not be modeled using a single-frequency or narrowband total radiation pattern. To address this, wideband arrays will be studied in the latter half of this chapter.

The chapter is organized as follows. In Section 7.2, the two-dimensional sidelobe-minimization problem is set up for symmetric planar arrays. In Section 7.3, sidelobe-minimizing weights are developed for two-dimensional arrays of arbitrary elements, and in Section 7.4 the weighting method is extended to arrays scanned away from broadside. In Section 7.5, optimal arrays and sidelobe levels are obtained for arrays of omnidirectional antennas of size N = 4-7 for two distinct beamwidths. In Section 7.6, optimal arrays and sidelobe levels are obtained for arrays of patch (or microstrip) antennas. A method of determining sidelobe-minimizing weights over a range of frequencies is developed in Section 7.7. Optimal arrays and sidelobe levels are obtained for wideband arrays of omnidirectional antennas and for wideband arrays of patch antennas in Section 7.8, and conclusions are presented in Section 7.9.

7.2. Two-Dimensional Symmetric Arrays

The elements of the array are assumed to lie in the xy-plane, at z = 0. The position of the nth element is d_n = (x_n, y_n, 0), as shown in Figure 27.

Figure 27. Arbitrary planar array.

The output or radiation pattern of an antenna array (or spatial filter) is given by

T(w, D, k_x, k_y) = Σ_{n=1}^{N} w_n f_n(k_x, k_y) e^{j(k_x x_n + k_y y_n)},    (7.1)

where w_n is the weight multiplying the signal of the nth element, and f_n(k_x, k_y) is the antenna gain of the nth element in the direction determined by (k_x, k_y). It will be assumed that the array positions D are known. This chapter deals with two-dimensional arrays with symmetry about the origin. That is, if an element is located at d_n = (x_n, y_n, 0) and not at the origin, there exists another element in the array with the same weighting coefficient located at -d_n = (-x_n, -y_n, 0). It is desired for the array pattern to have a maximum in the desired direction, denoted (k_xd, k_yd), and to have a specified beamwidth for the main beam. In this section, the array is to be steered toward θ_d = 0, or (k_xd, k_yd) = (0, 0).

When the antennas in the array are identical (have the same individual radiation pattern), the radiation pattern for the entire antenna array takes the form given in (7.2) if there is an element at the origin (odd number of elements):

T(w, D, k_x, k_y) = w_1 f(k_x, k_y) + f(k_x, k_y) Σ_{n=2}^{(N+1)/2} 2 w_n cos(k_x x_n + k_y y_n).    (7.2)

If there is an even number of elements, the radiation pattern has the form

T(w, D, k_x, k_y) = f(k_x, k_y) Σ_{n=1}^{N/2} 2 w_n cos(k_x x_n + k_y y_n).    (7.3)

This constraint keeps the array factor real when the weights are real, allowing efficient computation of the results.

7.3. Sidelobe-Minimizing Weights for Two-Dimensional Arrays

A method of sidelobe minimization for symmetric linear arrays with real weights was given as an example in Section 4.2. The results are now extended for two-dimensional arrays of non-isotropic elements.
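The symmetric forms (7.2) and (7.3) are cheap to evaluate because each position/weight pair contributes a single real cosine term. A sketch for the even-N case of (7.3), with f(k_x, k_y) = 1 and hypothetical element positions:

```python
import numpy as np

def symmetric_array_factor(kx, ky, positions, weights):
    """Array factor of a symmetric planar array, eq. (7.3): each element
    at d_n is paired with one at -d_n carrying the same real weight, so
    the factor is a real sum of cosines. `positions` holds one element
    of each pair, in wavelengths."""
    T = np.zeros_like(np.asarray(kx, dtype=float))
    for (x, y), w in zip(positions, weights):
        T = T + 2.0 * w * np.cos(kx * x + ky * y)
    return T

# Hypothetical 4-element symmetric array: pairs at +/-(0.25, 0) and +/-(0, 0.25).
pos = [(0.25, 0.0), (0.0, 0.25)]
wts = [0.25, 0.25]
print(symmetric_array_factor(0.0, 0.0, pos, wts))  # broadside value: 1.0
```

The broadside value 2(0.25 + 0.25) = 1 reflects the unit-mainbeam normalization used in the optimization below, and the result is purely real, which is what makes the real-weight linear program possible.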
The discussion will follow the development in Section 4.2. The cutoff value dictates how wide or narrow the array's mainbeam is to be, and a circular transition region will be assumed. The transition region is defined as the region in (k_x, k_y) space in which the sidelobes are not to be suppressed; for a circular transition region, the cutoff occurs when θ ≥ θ_c. The suppression region Θ is now two-dimensional, and can be specified as

Θ = { (k_x, k_y) : k_c² ≤ k_x² + k_y² ≤ (2π/λ)² },    (7.4)

where the cutoff value is k_c = 2π sin(θ_c)/λ. The region in the (k_x, k_y) plane with magnitude less than 2π/λ is commonly referred to as the visible region; values of (k_x, k_y) outside this region do not correspond to any value of (θ, φ) at the frequency of interest. The suppression region is illustrated in Figure 28 in (k_x, k_y) space.

Figure 28. Suppression region for two-dimensional arrays.

The suppression region can be sampled at R places as in Section 4.2. Each sample point is denoted by (k_xi, k_yi) for i = 1, 2, ..., R. The parameter R is chosen sufficiently large such that when the resulting radiation pattern is plotted, it is suppressed even between sample points. The LP problem of (4.20) is rewritten to include the non-isotropic element pattern in (7.5):

min t
s.t.  T(w, D, 0, 0) = 1,
      |T(w, D, k_xi, k_yi)| ≤ t,  i = 1, 2, ..., R.    (7.5)

The constraints in (7.5) are again linear functions of the weights, and can be rewritten as affine inequalities exactly as done in Section 4.2. When the array is steered to broadside, the weights can be chosen to be real. The result is that the problem of finding the optimal sidelobe-suppressing weights for a two-dimensional array of non-isotropic elements is again a linear program, and therefore rapidly solvable. Once the suppression region is specified, the optimal weights can be determined.

As an example, consider the 7-element hexagonal array of Figure 19, but with a radius of 0.77λ. The beamwidth is 30°. Using this beamwidth to define the suppression region as in Figure 28, the optimal weights can be determined; the positions and optimal weights are listed in Table XVIII. A weighting method with a linear phase taper (from Section 3.2) will be used for comparison. The sidelobe level for the array factor with the optimal weights is 13.9 dB, while the sidelobe level for the linear phase-tapered array is 10.9 dB. The optimal array factor is plotted, along with the array factor using the phase-tapered weights, in Figure 29 (elevation plot for φ = 0°) and Figure 30 (elevation plot for φ = 45°).
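The linear program (7.5) can be handed to any LP solver once the suppression region has been sampled. The sketch below mirrors the broadside, even-N, real-weight case using scipy's linprog; the array geometry and sampling densities are illustrative stand-ins, not the hexagonal example above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical symmetric array: one element of each pair at (x_n, y_n), in
# wavelengths; the mirror element at -(x_n, y_n) carries the same weight.
pos = np.array([[0.25, 0.0], [0.0, 0.25]])
lam = 1.0
theta_c = np.radians(30.0)
kc = 2 * np.pi * np.sin(theta_c) / lam          # cutoff value, as in (7.4)

# Sample the suppression region kc <= |k| <= 2*pi/lam on a polar grid.
radii = np.linspace(kc, 2 * np.pi / lam, 12)
angles = np.linspace(0.0, 2 * np.pi, 24, endpoint=False)
kx = np.outer(radii, np.cos(angles)).ravel()
ky = np.outer(radii, np.sin(angles)).ravel()

# Row i, column n of B holds 2*cos(kx_i*x_n + ky_i*y_n), the basis of (7.3).
B = 2.0 * np.cos(kx[:, None] * pos[:, 0] + ky[:, None] * pos[:, 1])
R, M = B.shape

# Variables are [w_1 ... w_M, t]; minimize t subject to
#   |T(k_i)| <= t  (two affine inequalities per sample point), and
#   T(0, 0) = 1    (unit mainbeam constraint).
c = np.zeros(M + 1)
c[-1] = 1.0
A_ub = np.vstack([np.hstack([B, -np.ones((R, 1))]),
                  np.hstack([-B, -np.ones((R, 1))])])
b_ub = np.zeros(2 * R)
A_eq = np.hstack([2.0 * np.ones((1, M)), np.zeros((1, 1))])
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * M + [(0.0, None)])
w, t = res.x[:M], res.x[-1]
```

With the weights in hand, t is the worst sampled sidelobe; relative to the unit mainbeam it is 20·log10(t) in dB, which is how the sidelobe levels in this chapter are reported.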
Figure 29. Array factors for optimal weighted and phase-tapered array (φ = 0°).
Figure 30. Array factors for optimal weighted and phase-tapered array (φ = 45°).

TABLE XVIII
OPTIMAL WEIGHTS FOR 7-ELEMENT HEXAGONAL ARRAY

Position (λ)           Weight
(0, 0)                 0.200
±(0.77, 0)             0.133
±(0.385, 0.667)        0.133
±(0.385, -0.667)       0.133

7.4. Sidelobe-Minimizing Weights for Scanned Two-Dimensional Arrays

In this section, the procedure of Section 7.3 is extended to two-dimensional arrays scanned away from broadside. It is assumed that the array is scanned toward (θ_d, φ_d), so that the wavevector components in the desired direction are

k_xd = (2π/λ) sin θ_d cos φ_d,    (7.6)

k_yd = (2π/λ) sin θ_d sin φ_d.    (7.7)

Two beamwidths are specified. The first is the polar (elevation) beamwidth Δθ, which is the beamwidth when the azimuth angle is fixed at φ_d. The second is the azimuth beamwidth Δφ, which is the beamwidth when the polar (elevation) angle is fixed at θ_d. The method of Section 7.3, in which the weights are real, will not produce optimal weights when the array is to be steered from broadside; hence, the complex method of Section 6.3 must be used. Once the suppression region has been specified as in Figure 28, the method can be directly implemented for the two-dimensional case.

An example will now be presented using this method. A 5x5 rectangular array with uniform spacing of λ/4 is used, scanned to (θ_d, φ_d) = (90°, 0°). The results are compared to the performance of an array with a linear phase taper; for comparison, the beamwidth selected for determining the optimal weights is identical to that of the linear phase-tapered array, Δθ/2 = 68.7° and Δφ/2 = 38.2°. The magnitude of the array factor is plotted in Figure 31. The maximum sidelobe level is 12.04 dB.
Figure 31. AF for phase-tapered weights: (a) elevation plot; (b) azimuth plot.

The following parameters are defined to indicate the boundary of the suppression region in the (k_x, k_y) plane:

k_xφ = (2π/λ) sin θ_d cos(φ_d + Δφ/2),    (7.8)

k_yφ = (2π/λ) sin θ_d sin(φ_d + Δφ/2),    (7.9)

k_xθ = (2π/λ) sin(θ_d - Δθ/2) cos φ_d,    (7.10)

k_yθ = (2π/λ) sin(θ_d - Δθ/2) sin φ_d.    (7.11)

Increasing θ for a fixed azimuth angle is equivalent to moving outward in the radial direction in the (k_x, k_y) plane, while increasing φ for a fixed elevation angle is equivalent to moving in a circle (at a fixed distance from the origin) in the (k_x, k_y) plane. Consequently, the suppression region Θ has the form plotted in Figure 32 for this example.

Figure 32. Suppression region for an array scanned away from broadside.

Employing the method of Section 6.3 with the suppression region in Figure 32, the optimal weights can be determined; they are listed along with their respective positions in Table XIX. The array factor using the optimal weights is plotted, along with the array factor using the phase-tapered weights, in Figure 33 for an azimuth scan with a fixed elevation angle (θ = 90°). The elevation scan is plotted in Figure 34 with a fixed azimuth angle (φ = 0°). The sidelobe level is reduced to 31.2 dB, showing the superiority of this method over the linear phase-taper method.

TABLE XIX
OPTIMAL WEIGHTS WITH ASSOCIATED POSITIONS
(5x5 array with λ/4 spacing; complex weights given in magnitude∠phase form)

Figure 33. Azimuth plot of array factors with optimal and phase-tapered weights.
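The steering components (7.6)-(7.7) and the suppression-region boundary points (7.8)-(7.11) are direct trigonometric evaluations. A small sketch, exercised with the example numbers used above (θ_d = 90°, φ_d = 0°):

```python
import numpy as np

def scan_wavevector(theta_d, phi_d, lam=1.0):
    """Steering wavevector components, eqs. (7.6)-(7.7); angles in radians."""
    k = 2 * np.pi / lam
    return (k * np.sin(theta_d) * np.cos(phi_d),
            k * np.sin(theta_d) * np.sin(phi_d))

def suppression_edges(theta_d, phi_d, d_theta, d_phi, lam=1.0):
    """Boundary points of the mainbeam region, eqs. (7.8)-(7.11): one
    half-beamwidth step in azimuth and one in elevation."""
    k = 2 * np.pi / lam
    kx_phi = k * np.sin(theta_d) * np.cos(phi_d + d_phi / 2)
    ky_phi = k * np.sin(theta_d) * np.sin(phi_d + d_phi / 2)
    kx_th = k * np.sin(theta_d - d_theta / 2) * np.cos(phi_d)
    ky_th = k * np.sin(theta_d - d_theta / 2) * np.sin(phi_d)
    return (kx_phi, ky_phi), (kx_th, ky_th)

# Example from the text: scan to (90 deg, 0 deg) with the measured beamwidths.
kxd, kyd = scan_wavevector(np.pi / 2, 0.0)
edges = suppression_edges(np.pi / 2, 0.0,
                          2 * np.radians(68.7), 2 * np.radians(38.2))
```

The elevation step moves the boundary point radially inward from (k_xd, k_yd), and the azimuth step rotates it about the origin, which is exactly the geometry of the scanned suppression region in Figure 32.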
Figure 34. Elevation plot of array factors with optimal and phase-tapered weights.

7.5. Symmetric Arrays of Omnidirectional Elements

In this section, the antennas are omnidirectional, so that

f(k_x, k_y) = 1.    (7.12)

The array elements are allowed to assume an arbitrary geometry, subject to the arrays being symmetric, as in (7.2) and (7.3). The positions will be varied using the PSO algorithm as in Chapter 6 in order to determine an optimal geometry for minimum sidelobes. The goal of this section is to determine the geometry that, accompanied by the optimum weights of Section 7.2, yields the minimum sidelobe level. The parameters and method of implementation used are the same as in Chapter 6. The algorithm is again run with P particles, which move around and interact via the PSO algorithm. As the element positions are varied, the optimum weights are calculated for each particle (or array) at every iteration; in this way, the weights and the array geometry are simultaneously optimized. The number of particles is increased until successive runs of the algorithm return identical results. Because the algorithm starts with random and independent particles (or arrays) every time, and because it consistently returns identical solutions, it is likely that the results are globally optimal.

Results will be presented for arrays of size N = 4-7. For symmetric arrays, there exist no optimal 2- or 3-element geometries: the symmetry forces such arrays to be linear, and a linear array cannot suppress sidelobes in two dimensions because its pattern is only a function of one variable.

Two cases are considered in this section. In Case 1, the beamwidth will be 60°; this indicates the sidelobes are to be suppressed when θ ≥ 30°. The cutoff value can be calculated to be

k_c1 = (2π/λ) sin(30°) = π/λ.    (7.13)

For Case 2, a smaller beamwidth of 30° is considered, so the sidelobes will be suppressed when θ ≥ 15°. The cutoff value is then

k_c2 = (2π/λ) sin(15°) = 0.518π/λ.    (7.14)

The simulations were performed on a 3.0 GHz processor running MATLAB. The number of particles required for regular convergence is given in Table XX, along with the average simulation time.
TABLE XX
NUMBER OF REQUIRED PARTICLES FOR PSO AND COMPUTATION TIME FOR N = 4-7

N:             4    5    6    7
P:             290  300  800  1000
Time (hours):  3    3.5  7    10

The optimal arrays for Case 1 are plotted in Figure 35. For N=4 and 5, the optimal arrays are close to being circular, with an increasingly large radius, as expected with a circular suppression region. The result for N=6 is distinct, as it takes a cross shape. The optimal array for N=7 is also a hexagonal array, with a center element (present when the array has an odd number of elements). The optimal positions are listed with the sidelobe levels in Table XXI, and the corresponding optimal weights are given in Table XXII. In Figure 36, the magnitude of the array factor is plotted as a function of the elevation angle θ for several azimuth angles. The mainbeam is almost identical within the transition region (θ ≤ 30°) for distinct azimuth angles. This indicates a circularly symmetric mainbeam.

Figure 35. Optimal symmetric array locations for Case 1 (dimensions in λ).

TABLE XXI
OPTIMAL SLL AND POSITIONS FOR CASE 1 (DIMENSIONS IN λ)

TABLE XXII
OPTIMAL WEIGHTS FOR CASE 1

Figure 36. Magnitude of T(θ) at distinct azimuthal angles (Case 1), N=7.

The optimal arrays for Case 2 are plotted in Figure 37. The arrays for this case are similar to the results for Case 1 except that they are spread out farther, which is expected for a narrower mainbeam. The results for N=6 differ significantly between the two cases, indicating that the results can have significant variance depending on the beamwidth. The positions and sidelobe levels for this case are presented in Table XXIII, and the optimal weights are listed in Table XXIV. The results for Case 2 for N=6 and N=7 are almost identical, the difference being the center element. Note that the addition of this element only lowers the sidelobe level by 0.1 dB; the extra complexity introduced by adding a seventh element in this case would not be very beneficial. This information would be advantageous to an array designer in determining the number of elements needed to achieve a given sidelobe level. The magnitude of the array factor at distinct azimuthal angles is plotted in Figure 38 for the N=7 array. The mainbeam is again identical when θ ≤ 15° for distinct azimuth angles, indicating a circular mainbeam.

Figure 37. Optimal symmetric array locations for Case 2 (dimensions in λ).

TABLE XXIII
OPTIMAL SLL AND POSITIONS FOR CASE 2 (DIMENSIONS IN λ)

TABLE XXIV
OPTIMAL WEIGHTS FOR CASE 2

Figure 38. Magnitude of T(θ) at distinct azimuthal angles (Case 2), N=7.
7.6. Symmetric Arrays of Patch Antennas

In this section, symmetric arrays of patch antennas steered to θ_d = 0 are considered, using the development of Section 7.2. When a microstrip or patch antenna has a thin dielectric, the far-field components of the electric field are approximately given by (7.15) and (7.16) for polar angles θ ≤ π/2 [91]:

E_θ = E_0 [sin(kW sinθ sinφ / 2) / (kW sinθ sinφ / 2)] cos(kL sinθ cosφ / 2) cosφ,    (7.15)

E_φ = -E_0 [sin(kW sinθ sinφ / 2) / (kW sinθ sinφ / 2)] cos(kL sinθ cosφ / 2) cosθ sinφ.    (7.16)

In (7.15) and (7.16), k is the free-space wavenumber, W is the width of the patch, and L is the length of the patch. For θ > π/2, which is the region below the patch, the radiated fields will be assumed to be zero. The normalized pattern to be used for f(θ, φ) in (7.1) will be

f(θ, φ) = sqrt((E_θ/E_0)² + (E_φ/E_0)²).    (7.17)

In this section, the patch dimensions are chosen to be W = L = 0.5λ. The directivity of this antenna can be numerically calculated to be 9.34 dB. The patch pattern is plotted in Figure 39 for two fixed azimuth angles.

Figure 39. Magnitude of patch pattern (in dB): (a) φ = 0°; (b) φ = 90°.

The pattern here is complicated enough that it is highly unlikely that an analytical weighting method can be developed that minimizes the sidelobes in a patch array. However, adding the pattern does not significantly increase the difficulty of determining the optimal weights, and using a realistic, complicated antenna pattern helps to highlight the utility of the LP method of weight selection. The method of solution is identical to that in Section 7.3, and the two cases discussed in the previous section are again considered here. Using the solution method discussed previously, the optimal arrays for Case 1 (beamwidth equal to 60°) are presented in Figure 40. The optimal arrays of patch antennas are skewed.
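The element pattern (7.15)-(7.17) is straightforward to evaluate numerically; the sketch below does so with numpy, using the W = L = 0.5λ dimensions from the text, and returns zero below the ground plane as assumed above. Note numpy's sinc is the normalized sin(πx)/(πx), so the argument is divided by π to obtain sin(X)/X.

```python
import numpy as np

def patch_pattern(theta, phi, W=0.5, L=0.5, lam=1.0):
    """Normalized far-field magnitude of a thin-substrate rectangular
    patch, eqs. (7.15)-(7.17); W and L in wavelengths, angles in
    radians. Returns 0 below the ground plane (theta > pi/2)."""
    k = 2 * np.pi / lam
    X = k * W * np.sin(theta) * np.sin(phi) / 2
    sinc_term = np.sinc(X / np.pi)            # sin(X)/X with sinc(0) = 1
    common = sinc_term * np.cos(k * L * np.sin(theta) * np.cos(phi) / 2)
    E_theta = common * np.cos(phi)
    E_phi = -common * np.cos(theta) * np.sin(phi)
    mag = np.sqrt(E_theta**2 + E_phi**2)
    return np.where(theta <= np.pi / 2, mag, 0.0)
```

At broadside (θ = 0) both trigonometric factors reduce to 1 and the magnitude is sqrt(cos²φ + sin²φ) = 1, so the pattern is unity at the steering direction, consistent with the normalization used for f(θ, φ).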
The results for N=4, 5, and 7 are fairly similar to the Case 1 results for omnidirectional elements; however, they are slightly rotated and more spread out. The arrays are more spread out (or elongated) because the array factor effectively has a narrower mainbeamwidth, due to the patch pattern, which decreases in magnitude for θ > 0°. The result for N=6 is a significant departure from the omnidirectional case, which indicates that the antenna pattern must be taken into account in determining an optimal geometry. The positions and optimal sidelobe levels for Case 1 of patch elements are listed in Table XXV, and the corresponding optimal weights are listed in Table XXVI. The magnitude of the radiation pattern is plotted in Figure 41 as a function of θ for three distinct azimuth angles for the N=7 array. The directivity of this array is evaluated numerically to be 17.67 dB.

Figure 40. Optimal symmetric patch array locations for Case 1 (units of λ).

TABLE XXV
OPTIMAL SLL AND POSITIONS FOR CASE 1 OF PATCH ELEMENTS (UNITS OF λ)

TABLE XXVI
OPTIMAL WEIGHTS FOR CASE 1 WITH PATCH ELEMENTS

Figure 41. Magnitude of T(θ) at distinct azimuth angles (Case 1), N=7 (patch).

Next, Case 2 (beamwidth equal to 30°) is considered with patch elements. The optimal arrays are plotted in Figure 42. The arrays for this case are fairly similar to the results of Case 1, except for again being spread out farther; the narrower mainbeam leads to more spread-out arrays. The result for N=6 is more symmetric and less football-shaped, while the result for N=7 is again a hexagonally sampled array.
Figure 42. Optimal symmetric patch array locations for Case 2 (units of λ).

The optimal positions along with the sidelobe levels are listed in Table XXVII, and the optimal weights are given in Table XXVIII. The sidelobe level increased on average by 7.5 dB when compared to Case 1 of Section 7.4. The magnitude of the radiation pattern is plotted as a function of θ for distinct azimuth angles in Figure 43 for the N=7 array. The directivity of this array is evaluated numerically to be 17.72 dB.

TABLE XXVII
OPTIMAL SLL AND POSITIONS FOR CASE 2 OF PATCH ELEMENTS (UNITS OF λ)

TABLE XXVIII
OPTIMAL WEIGHTS FOR CASE 2 WITH PATCH ELEMENTS

Figure 43. Magnitude of T(θ) at distinct azimuth angles (Case 2), N=7 (patch).
7. arrays transmit and receive information over a range of frequencies. In this section. (k x . Wideband Weighting Method The previous efforts have been focused on optimizing an antenna array for minimizing sidelobes at a single frequency. k y . This interval will be written as c c [ f L .19) The weighting method used in Section 7. would add an intractable number of constraints. While this would be technically correct. k y .129 7. Hence. particularly for 2D arrays or for very wideband arrays. .18) where again Θ is the region in which the sidelobes are to be suppressed. to do so. the optimization would often be computationally intractable. However. fU ] = .3 could be extended such that every constraint is duplicated at every frequency. k x . λ ) ∈ Θ (7. In practice. and in the region given by θ ≥ θ c for all frequencies. λ L λU where f L < fU . . (7. The minimum sidelobe level in a wideband array is written as SLL = max  T (w. a more efficient method is developed. It will be assumed that the sidelobes are to be suppressed within a continuous frequency range. generally centered about some carrier frequency. this was the method proposed in [92]. a method of choosing weights that yield the minimum sidelobes over a range of frequencies is developed. d. λ )  . An array that has low sidelobes at a given frequency is not guaranteed to have low sidelobes at other frequencies within the band the array is operating in.
Defining

    Θ(λ_i) = { (k_x, k_y) : (2π sin θ_c / λ_i)^2 ≤ k_x^2 + k_y^2 ≤ (2π / λ_i)^2 },    (7.20)

it follows that

    Θ = ∪_i Θ(λ_i),    (7.21)

where the union is over all wavelengths within [λ_U, λ_L]. The total radiation pattern is written as the product of the element pattern and the array factor,

    T(w, d, k_x, k_y, λ) = f(k_x, k_y, λ) AF(w, d, k_x, k_y).    (7.22)

The two-dimensional array factor is rewritten from (2.7) as

    AF(w, d, k_x, k_y) = Σ_{n=1}^{N} w_n exp[−j(k_x x_n + k_y y_n)].    (7.23)

Equation (7.23) indicates that the array factor does not depend on λ when k_x and k_y are specified. This is not true for the element pattern, f(k_x, k_y, λ), which in general will not be independent of λ when k_x and k_y are specified, and hence cannot be written as a function of only two variables. This can be seen from the patch element pattern given in (7.17). Most antennas will exhibit a notable change in radiation pattern over the band of operation. For the wideband case, the variation of the antenna pattern with frequency should be taken into account.
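As a concrete check of (7.19)–(7.23), the wideband sidelobe level of a given weighting can be evaluated by brute force: sample wavelengths in [λ_U, λ_L], mask the annulus Θ(λ_i) of (7.20), and take the maximum of |T|. The sketch below is a minimal example under stated assumptions: a hypothetical uniform four-element cross array with omnidirectional elements (f = 1, so T reduces to the array factor), positions in units of the center wavelength.

```python
import numpy as np

# Hypothetical example: uniform 4-element "cross" array, omnidirectional
# elements (f = 1).  Positions are in units of the center wavelength.
pos = np.array([[0.25, 0.0], [-0.25, 0.0], [0.0, 0.25], [0.0, -0.25]])
w = np.full(4, 0.25)                       # AF(0, 0) = sum of weights = 1

def array_factor(w, pos, kx, ky):
    # Eq. (7.23): AF depends only on (kx, ky), not on the wavelength.
    phase = np.multiply.outer(kx, pos[:, 0]) + np.multiply.outer(ky, pos[:, 1])
    return (w * np.exp(-1j * phase)).sum(axis=-1)

def wideband_sll(w, pos, lam_u, lam_l, theta_c, n_lam=16, n_k=201):
    # Eq. (7.19): max |T| over Theta, the union (7.21) of the annuli
    # Theta(lam_i) of Eq. (7.20), with wavelengths sampled in [lam_u, lam_l].
    k_max = 2 * np.pi / lam_u              # largest visible wavenumber
    k = np.linspace(-k_max, k_max, n_k)
    kx, ky = np.meshgrid(k, k)
    kmag = np.hypot(kx, ky)
    sll = 0.0
    for lam in np.linspace(lam_u, lam_l, n_lam):
        mask = (2 * np.pi * np.sin(theta_c) / lam <= kmag) & (kmag <= 2 * np.pi / lam)
        sll = max(sll, np.abs(array_factor(w, pos, kx[mask], ky[mask])).max())
    return sll

# FBW = 0.5 band in units of lambda_c; sidelobes suppressed for theta >= 30 deg.
sll = wideband_sll(w, pos, lam_u=0.8, lam_l=4.0 / 3.0, theta_c=np.radians(30.0))
```

For this small aperture the worst sample sits near the inner edge of the suppression region at the longest wavelength, where the beam is widest; the optimization of (7.28) trades such samples off against one another.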
To accomplish this, an auxiliary antenna pattern H(k_x, k_y), which is only a function of k_x and k_y, is defined as

    H(k_x, k_y) = max_{λ ∈ [λ_U, λ_L]} |f(k_x, k_y, λ)|.    (7.24)

This auxiliary antenna pattern is the maximum value of the antenna pattern evaluated at (k_x, k_y) over the frequency range of interest. Note that the maximum in (7.24) is taken only over the frequency range for which (k_x, k_y) is in the visible region. For instance, at the value (k_x, k_y) = (2π/λ_U, 0), the only frequency that has this value in the visible region is f = f_U. For the narrowband or single-frequency case, the auxiliary antenna pattern reduces to the antenna pattern at the frequency of interest.

Using (7.22) and (7.24), it follows that

    |T(w, d, k_x, k_y, λ)| ≤ H(k_x, k_y) |AF(w, d, k_x, k_y)|.    (7.25)

Hence, the total radiation pattern as a function of frequency can be minimized by minimizing the right-hand side of (7.25). The minimization is performed over the region specified in (7.21). Letting

    k_cL = 2π sin θ_c / λ_L,    (7.26)

the suppression region for the wideband case can be illustrated graphically, as shown in Figure 44. The wideband case is equivalent to minimizing the sidelobes in the (k_x, k_y) plane beginning at the cutoff value for the lowest frequency, k_cL, and extending the region to the largest wavenumber in the visible region at the highest frequency, 2π/λ_U.
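The auxiliary pattern (7.24) is straightforward to compute numerically. The sketch below uses a toy frequency-dependent element pattern (purely illustrative, standing in for a measured or modeled f) and shows the visible-region bookkeeping: at (k_x, k_y) = (2π/λ_U, 0), only f = f_U contributes, so H equals the element pattern evaluated at λ_U there.

```python
import numpy as np

LAM_U, LAM_L = 0.8, 4.0 / 3.0            # band edges, units of lambda_c

def element_pattern(kx, ky, lam):
    # Toy frequency-dependent element pattern (illustrative only; a real
    # design would substitute, e.g., the patch pattern of the text).
    return 1.0 + 0.3 * np.cos(2.0 * np.pi * lam) * (kx**2 + ky**2) * lam**2 / (2.0 * np.pi) ** 2

def auxiliary_pattern(kx, ky, n_lam=101):
    # Eq. (7.24): H(kx, ky) = max |f(kx, ky, lam)| over in-band wavelengths
    # for which (kx, ky) lies in the visible region (|k| <= 2*pi/lam).
    vals = [abs(element_pattern(kx, ky, lam))
            for lam in np.linspace(LAM_U, LAM_L, n_lam)
            if np.hypot(kx, ky) <= 2.0 * np.pi / lam]
    return max(vals)                     # empty only if (kx, ky) is never visible

# Eq. (7.25): |T| = |f| * |AF| <= H * |AF|, since H bounds |f| at every
# in-band wavelength where (kx, ky) is visible.
h = auxiliary_pattern(3.0, 1.0)
assert all(abs(element_pattern(3.0, 1.0, lam)) <= h + 1e-12
           for lam in np.linspace(LAM_U, LAM_L, 101))
```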
Figure 44. Suppression region for two-dimensional arrays over a frequency band.

Writing

    G(w, D, k_x, k_y) = H(k_x, k_y) AF(w, d, k_x, k_y),    (7.27)

the single-frequency sidelobe-minimizing optimization problem of (7.5) is rewritten for the wideband case as

    min t
    s.t. G(w, D, 0, 0) = 1
         |G(w, D, k_xi, k_yi)| ≤ t,  i = 1, 2, …, R.    (7.28)

In (7.28), the R samples are taken over the suppression region illustrated in Figure 44. The solution to (7.28) yields weights that produce the minimum sidelobe level over the frequency band of interest.
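Problem (7.28) is convex in the weights. As a hedged sketch: for the special case of a symmetric array built from mirrored element pairs with real weights (so G is a real cosine sum) and an omnidirectional element (H = 1), it reduces to a linear program, solved below with SciPy's `linprog`. The pair positions are hypothetical; the general complex-weight case would instead use a second-order cone solver.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical mirrored pairs at +/-(x_m, y_m), units of lambda_c; real
# weights make G a real cosine sum, so (7.28) becomes a linear program.
pairs = np.array([[0.30, 0.00], [0.00, 0.30], [0.22, 0.22]])
M = len(pairs)

# Sample R = 400 points in the suppression region of Figure 44: the annulus
# from k_cL = 2*pi*sin(theta_c)/lam_L out to 2*pi/lam_U.
lam_u, lam_l, theta_c = 0.8, 4.0 / 3.0, np.radians(30.0)
k_in, k_out = 2 * np.pi * np.sin(theta_c) / lam_l, 2 * np.pi / lam_u
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(k_in**2, k_out**2, 400))      # area-uniform radii
a = rng.uniform(0.0, 2 * np.pi, 400)
G = 2 * np.cos(np.outer(r * np.cos(a), pairs[:, 0]) +
               np.outer(r * np.sin(a), pairs[:, 1]))  # rows of G (H = 1)

# Variables (w_1..w_M, t):  min t  s.t.  G(0, 0) = 1  and  -t <= G_i(w) <= t.
c = np.r_[np.zeros(M), 1.0]
A_ub = np.vstack([np.column_stack([G, -np.ones(len(G))]),
                  np.column_stack([-G, -np.ones(len(G))])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * len(G)),
              A_eq=np.r_[2 * np.ones(M), 0.0][None, :], b_eq=[1.0],
              bounds=[(None, None)] * M + [(0.0, None)])
w_opt, sll = res.x[:M], res.x[-1]
```

The equality row enforces the unit mainbeam G(w, D, 0, 0) = Σ 2 w_m = 1, and the optimal t is the minimum sidelobe level over the sampled region.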
When the antenna elements do not have significantly different radiation patterns over the frequency band of interest, the radiation pattern can be approximated so that

    f(θ, φ, λ_i) ≈ f(θ, φ, λ_j)    (7.29)

for all λ_i, λ_j within the frequency band. For this case, determining the wideband weights is as simple as extending the suppression region as in Figure 37 and using the procedure of Section 7.3. When the antenna's radiation pattern is independent of frequency over the bandwidth of interest, the antennas are referred to as frequency independent.

The frequency range will be specified by the fractional bandwidth (FBW),

    FBW = Δf/f_c = (f_U − f_L)/f_c = (f_U − f_L) / [(f_U + f_L)/2],    (7.30)

where f_c is the center frequency. Fractional bandwidths are considered wideband when 0.2 < FBW < 0.5, and are considered ultrawideband when FBW ≥ 0.5 [93].

7.9. Optimal Wideband Arrays of Omnidirectional Antennas

In this section, the arrays are optimized to determine the minimum sidelobe level over a range of frequencies. An ultrawideband case is considered in which FBW = 0.5. The antenna elements are omnidirectional and independent of frequency over the frequency range of interest, so that

    f(θ, φ, λ) = 1.    (7.31)

Hence, the optimization in this section will focus on the array factor, which is equivalent to the total radiation pattern for this case.
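As a quick check of (7.30): for the FBW = 0.5 design considered here, the band edges are f_L = 0.75 f_c and f_U = 1.25 f_c, i.e., wavelengths from 0.8 λ_c up to (4/3) λ_c (normalized values, taking f_c = 1):

```python
def fractional_bandwidth(f_lo, f_hi):
    # Eq. (7.30): FBW = (f_U - f_L) / f_c, with f_c = (f_U + f_L) / 2.
    return (f_hi - f_lo) / ((f_hi + f_lo) / 2.0)

# FBW = 0.5 band, in units of the center frequency (f_c = 1):
f_lo, f_hi = 0.75, 1.25
fbw = fractional_bandwidth(f_lo, f_hi)         # ultrawideband per [93]
lam_l, lam_u = 1.0 / f_lo, 1.0 / f_hi          # longest/shortest wavelength (lambda_c units)
```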
The optimization procedures that were applied in the previous sections of this chapter are again sufficient for the problem at hand. For comparison with the results of Section 7.4, the beamwidth will be 60°, so that the sidelobes will be suppressed when θ ≥ 30° for all frequencies. The resulting optimal arrays are found for N=4–7 and are presented in Figure 45. Note that the results are now given in units of λ_c = c/f_c. The optimal positions are also tabulated in Table XXIX, and the corresponding optimal weights are listed in Table XXX.

Figure 45. Optimal symmetric array locations for FBW=0.5 (units of λ_c).
TABLE XXIX
OPTIMAL SLL AND POSITIONS FOR OMNIDIRECTIONAL ELEMENTS (UNITS OF λ_c, FBW=0.5)
[Columns: (x1, y1), (x2, y2), (x3, y3), (x4, y4) element positions and SLL (dB); rows: N = 4, 5, 6, 7.]

TABLE XXX
OPTIMAL WEIGHTS FOR OMNIDIRECTIONAL ELEMENTS (FBW=0.5)
[Columns: w1, w2, w3, w4; rows: N = 4, 5, 6, 7.]

The results are interesting when compared with the narrowband results of Section 7.4. The arrays are slightly less spread out than in the narrowband case. In addition, the result for N=6 is a circular array in the wideband case, whereas it was a cross shape for the narrowband case. The SLL increased on average by 3.6 dB when the array is designed to perform in this ultrawideband situation. The SLL increase was 6.9 dB for N=6, which is relatively large. However, the SLL increased by only 1.7 dB for N=7 in extending the array from narrowband to ultrawideband. This is not a large penalty in SLL for greatly extending the bandwidth.

The total radiation pattern for the optimal array of size N=7 is now presented. Since it is now a function of frequency, the pattern will be plotted as a function of θ for distinct azimuth angles at the lower frequency (f_L, given in Figure 46), the center frequency (f_c, given in Figure 47), and the upper frequency (f_U, given in Figure 48).
Note that the beamwidth varies depending on the frequency. However, for all frequencies in the range of interest, the beamwidth is less than 60° and the sidelobes never rise above the SLL (7.2 dB) in the suppression region, as desired.

Figure 46. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L.
Figure 47. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c.
Figure 48. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U.

7.10. Optimal Wideband Arrays of Patch Antennas

In this section, wideband arrays of patch antennas are examined. Patch antennas have radiation patterns that vary significantly with frequency, but can be made to have a wider bandwidth using various methods, including adding slits [94] or adding a U-slot to the patch [95]. The bandwidth is selected to be FBW=0.2, which is much smaller than the ultrawideband case of Section 7.9 but still wideband enough that the narrowband assumption is not valid. The beamwidth will again be 60° for comparison with the earlier results.

The normalized field patterns for the patch, given in (7.11), will now be rewritten as a function of frequency in (7.32) and (7.33):

    E_θ(λ) = E_0 [sin(πW sin θ sin φ / λ) / (πW sin θ sin φ / λ)] cos(πL sin θ cos φ / λ) cos φ,    (7.32)

    E_φ(λ) = −E_0 [sin(πW sin θ sin φ / λ) / (πW sin θ sin φ / λ)] cos(πL sin θ cos φ / λ) cos θ sin φ.    (7.33)

The normalized pattern to be used for f(θ, φ, λ) will be

    f(θ, φ, λ) = sqrt[ (E_θ(λ)/E_0)^2 + (E_φ(λ)/E_0)^2 ].    (7.34)

The implicit assumption in (7.34) is that E_0 is approximately constant over the frequency range of interest.
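A direct implementation of (7.32)–(7.34) is sketched below; the patch width W, length L, and the unit E_0 are illustrative placeholder values, and NumPy's `sinc` handles the removable sin(x)/x singularity at broadside:

```python
import numpy as np

def patch_pattern(theta, phi, lam, W=0.5, L=0.3):
    # Eqs. (7.32)-(7.33): frequency-dependent patch field components.
    # E_0 cancels in the normalized pattern (7.34), so it is set to 1 here.
    x = np.pi * W * np.sin(theta) * np.sin(phi) / lam
    common = np.sinc(x / np.pi) * np.cos(np.pi * L * np.sin(theta) * np.cos(phi) / lam)
    e_theta = common * np.cos(phi)
    e_phi = -common * np.cos(theta) * np.sin(phi)
    # Eq. (7.34): normalized total pattern.
    return np.sqrt(e_theta**2 + e_phi**2)
```

At broadside (θ = 0) the normalized pattern is 1 for any φ and λ, while off broadside it varies with wavelength; this is exactly the frequency dependence that the auxiliary pattern of (7.24) absorbs.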
The PSO algorithm is again applied to determine the optimal positions. The resulting optimal arrays are found for N=4–7 and are presented in Figure 49. The results are again given in units of λ_c = c/f_c. The arrays are similar to the narrowband case of patch elements with the same beamwidth given in Figure 33. The seven-element array is again approximately hexagonal, which has been a recurring theme throughout this work. The optimal positions are also tabulated in Table XXXI, and the corresponding optimal weights are listed in Table XXXII.

Figure 49. Optimal symmetric patch array locations for FBW=0.2 (units of λ_c).
The arrays for this case appear to be spread out further than the narrowband case, when measured in units of the center wavelength. On average, the SLL increased by 1.9 dB in order to guarantee the sidelobe level over the frequency range of operation. The N=5 element array exhibited the lowest rise in sidelobes (only 1.1 dB), and the N=7 element array exhibited the highest rise in sidelobes (3.0 dB), in order to extend the bandwidth of the array.

TABLE XXXI
OPTIMAL SLL AND POSITIONS FOR PATCH ELEMENTS (UNITS OF λ_c, FBW=0.2)
[Columns: (x1, y1), (x2, y2), (x3, y3), (x4, y4) element positions and SLL (dB); rows: N = 4, 5, 6, 7.]

TABLE XXXII
OPTIMAL WEIGHTS FOR PATCH ELEMENTS (FBW=0.2)
[Columns: w1, w2, w3, w4; rows: N = 4, 5, 6, 7.]

Finally, the total radiation patterns for the optimal N=7 arrays are again plotted at the lower, center, and upper frequencies for fixed elevation angles. The radiation pattern at the lower frequency (f = f_L) is plotted in Figure 50, the center frequency (f = f_c) radiation pattern is given in Figure 51, and the upper frequency (f = f_U) radiation pattern is plotted in Figure 52.
As seen in the ultrawideband case, the beamwidth again varies depending on the frequency. However, the variance is less pronounced in this case because of the lower fractional bandwidth considered.

Figure 50. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_L.
Figure 51. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_c.
Figure 52. Magnitude of T(θ) at distinct azimuth angles (N=7) for f = f_U.

7.11. Conclusions

The sidelobe minimization technique was extended from 1D to 2D in this chapter. The PSO method was again effective in optimizing the array geometry; this optimization technique has proven to work well for a wide variety of problems. It was seen that the optimal geometry varies depending on the beamwidth and antenna elements used in the array.

The arrays were then studied over a wide frequency range. If an array is used over a frequency range, the narrowband optimization technique will not be optimal; hence, optimal weights were derived that minimize the sidelobe level over a range of frequencies. The optimal geometries and weights were found to vary with the fractional bandwidth used by the array.
In addition, the method lends itself well to being employed using parallel processing, which can significantly speed up computation time.
VIII. SUMMARY, CONCLUSIONS, AND FUTURE WORK

8.1. Summary and Conclusions

The primary goal of this dissertation has been to show the effect of an array's geometry on metrics of interest. These metrics include sidelobe level, interference rejection, and SINR. Because of the difficulty in analyzing an array's geometry, the geometry is often chosen to be a standard geometry in practice. However, this work has shown that gains in performance can be achieved via suitable optimization of the array geometry. The secondary goal has been to improve upon existing weight-selection strategies via convex optimization. The relatively new field of convex optimization will likely be a tremendously effective tool as it makes its way into the antenna field.

In Chapter 5, improving the performance of an adaptive array was considered. The interference-suppression capability of adaptive arrays was shown to vary with geometry, and consequently, the optimization of the geometry was of interest. Because of the wide range of environments in which these arrays operate, the concept of an interference environment was introduced. This was a necessary development, as optimizing over a specific situation would not have been extremely useful for an adaptive array. In this manner, the performance can be optimized on average over the likely scenarios in which the array is to perform. It was shown that the interference allowed through the spatial filter (that is, the antenna array) can be lowered by varying the geometry. In addition, this lowering of interference power was then shown to often translate into gains in overall SINR, a critical parameter in wireless communication.
Sidelobe level was shown to be critical in WCDMA systems in [87]. In Chapter 6, the process of determining the minimum possible SLL in a linear array was developed. The Dolph-Chebyshev sidelobe-minimizing weighting method was derived in 1946 [21] and has been used extensively since its publication. The derivation of the weights presented in Chapter 6 was a significant expansion of the capabilities of that method. Sidelobe-minimizing weights were derived that can be efficiently computed for any linear geometry, scan angle, beamwidth, and antenna type. Since the optimal weights can be found for any array geometry and any antenna type, the only variables remaining were the array positions. The PSO algorithm was employed to determine optimal positions that, along with the optimal weights, determined the optimal sidelobe levels for linear arrays of size N=2–7. Results were presented for linear arrays steered to broadside and 45°, for two different beamwidths, and for arrays with either omnidirectional or patch antenna elements. In addition, arrays of omnidirectional elements and short dipoles were examined to show the effect of the antenna's radiation pattern on the optimal geometry.

In Chapter 7, the optimal sidelobe level for 2D or planar arrays was considered. The methods for minimizing sidelobes in linear arrays were extended to the planar case. The total radiation pattern depends on both the weights, positions, and elements in the array. Optimal symmetric planar arrays of size N=4–7 were found for two beamwidths. In addition, a method of minimizing sidelobes in wideband planar arrays was developed. Sidelobe-minimizing weights were derived that suppress the sidelobes over a range of frequencies, instead of at a single frequency as is done in the narrowband case. This weighting method is valid for arbitrary bandwidths, beamwidths, antenna types, and planar array geometries.
The positions were optimized simultaneously with the weights to determine optimal sidelobe levels for wideband arrays. Results were presented for an ultrawideband case of omnidirectional elements, and for a wideband case of patch antenna elements.

Throughout this work, the hexagonal array has been a recurring optimal two-dimensional array. For interference suppression, the optimal 7-element array was a closely spaced hexagonal array. For sidelobe suppression in the planar case, the hexagonal array arose as the solution for distinct element types and beamwidths. Hence, when using an array with a number of elements that fits well with the hexagonal structure, it is likely that this geometry would be a good starting point.

As the traffic in wireless communication increases, every variable that can be exploited to improve performance will be optimized. Since the demand for higher data rates and reliability for a given bandwidth continues to grow exponentially, it is likely array geometry optimization will be employed in real systems.

8.2. Future Work

There is no shortage of applications in which array geometry optimization would prove useful. The obvious next steps would be to continue the work of this dissertation for arrays with a larger number of elements. The minimum sidelobe-producing antenna arrays for one and two dimensions could be studied for an increasing number of elements to determine the characteristics of the optimal arrays as the number of elements becomes large. The same extensions could be done to the interference-suppressing adaptive arrays.
Another topic of interest would be to optimize the weights and geometries of antennas consisting of nonidentical elements. Antenna array analysis is almost exclusively performed with identical elements, and it would be interesting to observe if gains could be made by exploiting elements with different radiation patterns.

Another interesting practical problem would be to minimize cross-polarization in antenna arrays while holding certain criteria constant (SLL, MSE, etc.). This problem could likely be solved in a manner similar to the solution methods of Chapters 6 and 7.

Implementing precise weights can sometimes be difficult in actual systems. Hence, deriving an optimization problem that returns weights from a discrete set of allowable weights would be advantageous. Optimizing over the positions to determine an optimal geometry for the discretized weights could then be performed.

The geometry optimization in this work has focused on translating the elements. For non-omnidirectional elements, the array could be optimized by allowing the elements to rotate or be put at an angle relative to the other elements. This would add new degrees of freedom to each element, which could translate to potentially large gains in performance.

On the theoretical side, optimization methods for proving that an array's geometry is globally optimal would be of value. This has not been done due to the mathematical intractability of the problem (many locally optimal points). However, as the field of optimization expands, it is possible that a clever technique could be developed to verify that an array is globally optimal.
149 Finally, in digital communications, the bit error rate (BER) for a given data rate is the definitive measure of performance for a wireless communication system. Hence, more general modeling methods that ultimately minimize the BER would be valuable. However, because of the large and complex nature of wireless communication systems, this would not be an easy task.
vol. J. “Design of unequally spaced arrays for performance improvement. Haber.. Wireless Communications. June 1996. Nemhauser. Conf. 15221524. 2005. Alvarez. [10] M. 222223. Gavish and A.” IEEE Proc. vol. 1985.. [3] H. and J. pp... “Unequally spaced. 12. .REFERENCES [1] L. Unz. 1960. 1961. “Array geometry for ambiguity resolution in direction finding. 73. “Optimization of array geometry for identifiable high resolution parameter estimation in sensor array signal processing. 12. 380384. 9. Inf. 1964. “Linear arrays with arbitrarily distributed elements. pp. cylindrical or spherical geometry. “Statistically designed densitytapered arrays. Branner.” IEEE Trans. vol. 621633. 2006. Harrington. C. M. Mar. 1987. 1960. Coe. Oct.. and A. and F. 8. 8. Kot. [7] M.” IEEE Trans. Wireless Radio: A History. Jan. [5] R. Jul.. [9] S.. vol. [4] D. R. pp. New York: McFarland & Company. L. Commun. C. J. 511523. “A new approach to array geometry for improved spatial spectrum estimation. July 1972. Mar. Sep. Ang.. R. Stutzman. planar. [8] W. Int. W. R.” IEEE Trans. “Sidelobe Reduction by Nonuniform Element Spacing. pp. pp. W. 53. Branner. vol. vol.” IEEE Trans. New York: Basic Books.. P. Kumar and G. Feb. 1999. Kumar and G.” in Proc. Antennas Propag. [11] C. [13] B. broadband antenna arrays. 1997. [12] A.” IEEE Trans. pp.” IEEE Trans. Antennas and Propag. New York: Cambridge University Press. pp. Goldsmith. Pillai. pp. vol. pp. Mar.. 499501. Antennas Propag. July 1964.. Alvarez: Adventures of a Physicist. Ogg. Thomas. [14] B. Sherman and F. Singapore.. pp. pp. vol. Y. 44. Jr. Antennas and Propag. 2005.” IEEE Trans. 187192. See. G. “Generalized Analytical Technique for the synthesis of unequally spaced arrays with linear. Weiss. vol. Antennas and Propag. 47. Antennas and Propag. Antennas Propag. Signal Process. Skolnik. 408417. AP20. U.” IEEE Trans.. Skolnik.” IEEE Trans. Packard and R. King. Antennas Propag. BarNess. Antennas Propagat. 
“Shapedbeam synthesis of nonuniformly spaced linear arrays. vol. 3543. 16131617. Sherman. [6] M. “Dynamic programming applied to unequally spaced arrays. [2] L. P. 889895.
[24] J.” IEEE Trans. Conf. Digital Signal Processing. 5061. IEEE Ultrasonics Symp. L. Antenna Theory: Analysis and Design. Petrella. IRE. Neuvo. Balanis. V. pp. Feb. pp. al. 1995. Manolakis. Elgetun. [25] R. Aug. “Discussion of Dolph’s paper. “Array pattern nulling by element position perturbations using a genetic algorithm. 652659. France.” Proc. Acoust. “A current distribution for broadside arrays which optimizes the relationship between beamwidth and sidelobe level. pp. 3rd Ed. Christodoulou. 2006. G. A. pp. Dawoud. 10871090. Radar. M. 26742679. 1996. Sramaki. Proakis and D. 2006. vol. 30. Cannes. Lett. Antennas Propag.. “Optimization of the beampattern of 2D sparse arrays by weighting. vol.” Proc. Feb. 34. IRE. “Optimum patterns for arrays of nonisotropic sources. Dolph. “Sidelobe Level and Null Control Using Particle Swarm Optimization. [22] C. 39. Microwaves. Holm and B. 1991. and A. [18] N. 35. New Jersey: Prentice Hall. Sinclair and F. [23] H. 349352. 335348.” Trans. [20] O. New York: Wiley. M. 3. “Ant colony optimization in thinned array synthesis with minimum sidelobe level. IRE. Mitra. DuHamel.” IEEE Trans. [16] T. M. P. Antennas Propag. Nov.” Electron. May 1947. QuevedoTeruel and E. 53. Riblet.” Int. May.151 [15] P. K. J. vol. pp. 15611566. and Y. Cairns. 41. IRE. 36. pp. pp. Wireless Comm. [27] S. G. [19] A. 5.. Antennas Propag... Tennant.” Proc. Speech. “On the properties and design of nonuniformly spaced linear arrays. Letters. vol. vol. Mar. 489492. 372380. et. H.. T. pp.. . [17] M. 1994. 174176. pp. pp. 1952. vol. 2005. 2005. vol. [26] G. S. June 1946. H. PGAP1.” IEEE Trans. Anderson.. “Optimum patterns for endfire arrays. M.” IEEE Trans. “Planar array synthesis with minimum sidelobe level and null control using particle swarm optimization. Khodier and C.” Proc. Jarske. vol. G. [21] C. 3rd ed. Signal Processing. Ismail and M. no. “Null steering in phased arrays by controlling the element positions. 1988. Dawoud. RajoIglesias.
vol. [39] F. 2nd Ed. 44. 2001. ElSallabi and S. 10361039. 1996. Marhefka. [37] M. [33] F. “Uniqueness and linear independence of steering vectors in array space. Nov. 53. Nie. “Adaptive antenna systems. [35] W. “Uniqueness study of measurements obtainable with arrays of electromagnetic vector sensors. S. Abouda. A. vol. and L. Introduction to Radar Systems. pp. 2007. 2001.” IEEE Trans.” Ph. Skolnik. Signal Proc. IEEE. “Effect of antenna array geometry and ULA azimuthal orientation on MIMO channel properties in urban city street grid. D. 637641. [29] B. [30] P. Apr. Virginia Inst. vol. D. J.. pp. 2006. 368372. Blacksburg. Murino. 44. 1998. pp. Sklar. no. [32] X. vol. Dec. Acoust. Antenna Theory and Design. Thiele. PIER 64. 2001. “Space time processing for third generation CDMA systems. Antennas Propag. pp.” IEEE Antennas Propag. Griffiths. dissertation. E. 55. New York: McGrawHill. K. Signal Processing. vol. Bevelacqua and C. 1981. “Effect of array orientation on performance of MIMO wireless channels. Li and Z. Jan. Mar. Haggman. 2. [38] B. 2nd Ed. pp. Nehorai. A. Digital Communications: Fundamentals and Applications. Godara and A. Letters. 2nd Ed. 21432159. Antennas. Amer. pp. 2004. New Jersey: Prentice Hall. M. 1967. New Jersey: Prentice Hall. pp. vol. Mantey. New York: Wiley. [40] L. Advanced Engineering Electromagnetics. Balanis. Ulaby. 2002. “Optimizing antenna array geometry for interference suppression”. Fundamentals of Applied Electromagnetics.” IEEE Trans. [36] J. Regazzoni.” Progress in Electromagnetics Research. A. Aug. L. Soc. IEEE Trans. G. 1989. 257278. 3.” J. 119123. J. 70. Ho and A. Cantoni. [31] A.” Proc. Tan. C. . 2001.. New York: McGrawHill. A.. 467475.152 [28] V. Technol. New York: Wiley. “Synthesis of unequally spaced arrays by simulated annealing. P. Stutzman and G. Balanis. Kraus and R. T. [34] C.. [41] K. H. Alam. 1996. Trucco and C. A. Widrow.
Maxwell. [55] R. 1 and 2. pp. 1873. Int. Soc. Applebaum. 302307. [46] B. Computational Methods for Electromagnetics. New York: WileyIEEE Press. Balakrishnan and R. pp. Vol. New York: Wiley. Antennas Propag. 3. 585598. Widrow and M. “A mathematical theory of linear arrays. [49] O. 2007. New York: Wiley. [48] S. A Treatise on Electricity and Magnetism. [43] A. L. Antennas Propag. [54] A. Macchi. “Antenna Design with a mixed integer genetic algorithm. 22. D. Mar. “Easy generation of DolphChebyshev excitation coefficients.. vol. [53] R. 577582. [50] J. “Adaptive Arrays. [51] A. “Evolutionary programming in electromagnetic optimization: a review. Harrington. Haykin. Mar. 1976. IRE WESCON Conf. 1996. [52] K. 1995. Symposium. 96104. 1943.” Bell System Technical Journal. 14. Part IV). A. 2007. “Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media. AP28. Sep.” IEEE Trans. [47] S. 80107.. vol. pp.” IEEE Trans. 1993. Hoff. 24. vol. F. E.. Antennas Propag. [45] H. vol. Schelkunoff. [44] N. vol. F. IEEE. “Adaptive switching circuits. 523537. Rec. 15081509. Mittra. L. Optimum Array Processing (Detection.” Proc. Nov. 69. Bresler... Haupt. S. 1980.” IEEE Trans. New Jersey: Prentice Hall. Adaptive Processing: The LMS Approach with Applications in Transmission. .153 [42] S. Antennas Propag. “A new algorithm for calculating the current distributions of DolphChebyshev arrays. S. pp. Adaptive Filter Theory. Hoorfar. Yee. Field Computation by Moment Methods. pp. pp. 55. Oxford: Clarendon Press. Sethuraman. Antennas Propag. Ray and R. no.. 1966.” IEEE Trans. Peterson. 1997. vol. pp. C. 55. 951952. part 4. Jr. 1981. Van Trees. 1960. L. New York: WileyIEEE Press. Estimation and Modulation Theory. Nov.” IEEE Trans. 2002. pp.” Proc. vol.
L. [62] H. Schriver. “A new polynomialtime algorithm for linear programming. G. D. S. 24. 1987. Aarts.” IEEE Trans. Theory of Linear and Integer Programming. vol. [67] Y. 220. 16061614. Johnson and Y. vol. Cambridge. C. vol. “Convex optimization algorithms for active balancing of humanoid robots. PA: SIAM. pp. 1994. RahmatSamii. “An introduction to convex optimization for communications and signal processing.” IEEE J.. RahmatSamii. [58] A. van Laarhoven and E. New York: Springer. 2005. Linear Programming 1: Introduction. pp. 47. 2004.. Theory. 682691. “Genetic algorithms and method of moments (GA/MOM) for the design of integrated antennas. Jr.” IEEE Trans.” Combinatorica. pp.” IEEE Trans. May 1983. 817822. [61] N. U. . pp. Vecchi. Q. 4. [66] S. pp. Image Process. P. 16. Sel. 1984.” IEEE Trans. vol. [69] S. Infor. 2006. Yu. Thapa. New York: McGrawHill. 1998. Dordrecht. Boyd and L.: Cambridge University Press. Nov. June 2007. and M.. Linear and Nonlinear Programming. K. [60] S. Dantzig and M. Gelatt. B. Science. 1996. “Optimization by Simulated Annealing”. Aug. K. Kirkpatrick. vol. Luo and W. InteriorPoint Polynomial Algorithms in Convex Programming. [63] Z. Sofer. vol. Simulated Annealing: Theory and Applications. J. [68] P. New York: Wiley. Oct. “Parallel particle swarm optimization and finitedifference timedomain (PSO/FDTD) algorithm for multiband and wideband patch antenna designs. Ni and T. [65] T. Nash and A. vol. 53. Nguyen. pp. pp. 373395. “Sequential greedy approximation for certain convex optimization problems. Nesterov and A. [64] K. Nemirovskii. Zhang. Reidel. Robotics. 15961610.” IEEE Trans. 671680. Holland: D. Q. 23. Areas Commun. Karmarkar.. [57] J. 14261438. Philadelphia. M. 2007. Antennas Propag. Jin and Y. Antennas Propag.154 [56] N. 1999. M. pp. Convex Optimization. 34593468. vol. Park. “Image superresolution using support vector regression. Aug. 49. 2003. N. Vandenberghe. [59] G. H. Mar. 1997. Park and F.
N. pp. Mar. vol. [73] J. vol. U. [74] R. genetic algorithm. Dudgeon and R. 2. and M. New York: Dover. 2001. [77] G. Texas. 1962.. 1977. AP10.. Bowman. and their hybrids: optimization of a profiled corrugated horn antenna. 3. E. Robinson. 2.” IEEE International Symposium on Antennas & Propagation. N. K. “Eigenstructure techniques for 2D angle estimation with uniform circular arrays.: PrenticeHall. London. Hajeck. 2001 Congr. Guyse. vol. 2004. [71] J. 1984. Zhang. Eberhart. 2002. Mersereau. Antennas Propag. RahmatSamii.” Raytheon Syst. June. 2005. 23952407. RahmatSamii. “Particle swarm optimization: developments.. S. 2005. D. [81] G. Y. pp. “Particle swarm optimization in electromagnetics. 42. Feb. C. Dover. Quadratic Programming and Affine Variational Inequalities: A Qualitative Study. S. Shi. Shilov.” IEEE Trans. no. Signal Process. A.” IEEE Trans. 52. C. [82] J. Robinson. 1995. NJ. vol. Linear Algebra. Evolutionary Computation. Y. [83] M. Chen. . “Particle swarm. “Particle swarm optimization. [79] C.” IEEE Trans. P.” in Proc. 13. Sinton.” Math. M. Andreasan. pp. IEEE Conf. Chen. Zoltowski. Lee. D. Eberhart and Y. and Y. [80] D. Mathews and M. 2005. Multidimensional Digital Signal Processing. E. D. “Practice safer GPS navigation – get protection. applications and resources. W. 1994. C. Tam and N. 1958. no. 311329. Kennedy and R. Limited Internal Rep. [72] T.” AsiaPacific Microwave Conference Proceedings. 137143. B. Springer. Res. B. Robinson and Y. pp. 1. “Linear arrays with variable interelement spacing. [76] R. [75] S. Statistical Signal Processing. Morrison. Oper. 1988. “Cooling schedules for optimal annealing. Introduction to Bessel Functions. G. 2004. Yen. New York. Clark.” in Proc. Piscataway... M. M. Jiao and F. C. M. “Synthesis of antenna array using particle swarm optimization. [78] F. San Antonio. Neural Networks IV. 397407. New York: Cambridge University Press. vol. vol. Antennas Propagat. Gray and L.155 [70] B.
“Effect of mutual coupling on the performance of adaptive arrays. 50.” IEEE Trans. May 1998. pp. WCDMA: Requirements and Practical Design. 954960. Lee. 1992. Boyd. pp. 2005. “Performance improvement in array processing architectures of WCDMA systems by low side lobe beamforming. Ksienski. Antennas Propag. K. [89] K. Signal Processing. Mink. vol. 13451347. [92] H. Ward. J. Lee and R.” IEEE Trans.” IEEE Trans. vol. K. “Wideband beamforming using fan filter. [90] M. Antennas Propag. pp. pp. “A broadband rectangular patch antenna with a pair of wide slits. Khanna and R. Conf. Nov. 46. vol. 2000. vol. Balanis. “Microstrip antenna technology. [85] Z. “Broadband DOA estimation using frequency invariant beamforming. 21432151. New York: Wiley. 1981. pp. Z. [95] K. 45. vol. Kennedy. Kohno. [88] M.. Antennas Propag. . 2006. pp. 324328. Antennas Propag. pp. 533536.” IEEE Trans. “Recursive fan filters for broadband partially adaptive antenna. vol. 48.. R. Mar. Ghavami and R. Ding and R.. Tong. Ghavami.” IEEE Trans. 54. Sep.” IEEE Int. Nishikawa et al. 785791. vol. Commun.. 1997. 2002. 185188. Jan. Carver and J. Hsu.. AP31... Signal Process. pp.” IEEE Trans. vol.” IEEE Trans. 49. 526532. W. 48. 2001. “A broadband Uslot rectangular patch antenna on a microwave substrate.” IEEE Trans. A. Antennas Propag. Gupta and A.. 30823086. Saxena. “Wideband Smart Antenna Theory Using Rectangular Array Structures. K. 14631469. May 1983. Q. [94] K. Personal Wireless Comm. pp. 224. B. AP22. Luk. June 2000. Feb. 2004. [91] K. Lebret and S. pp.156 [84] I. [86] R. [93] D. Signal Processing. Jan..” IEEE Trans. pp. vol. Woodard.” in Proc. “Antenna array pattern synthesis via convex optimization. “Mutual coupling compensation in UCAs: simulations and experiement. ISCAS. Tanner and J. Wong and W. Sep. [87] R. Huang and C.