
Neural Network Based Antenna Analysis Using

Genetically Optimized Bézier Parameterization

Athar Kharal
Humanities and Sciences Department, NUST-CAE, Risalpur
Email: atharkharal@gmail.com

Irfan Shahid
Avionics Engineering Department, NUST-CAE, Risalpur
Email: irfan61cae@hotmail.com

Muzammil Bashir∗
Avionics Engineering Department, NUST-CAE, Risalpur
Email: muzammil360@gmail.com
Phone: (+92) 336 610 8980
∗ Corresponding author

Abstract—Antenna design is a time consuming and tedious task which needs to be sped up. This work discusses time efficient determination of antenna radiation pattern and return loss based on an Artificial Neural Network. Return loss and radiation pattern data have been parameterized using Bézier parameterization and then optimized using a genetic algorithm (GA) for better results. Finally, a Feed-Forward Back-Propagation (FFBP) neural network has been trained on the normalized optimized Bézier parameters. Results show that neural network based antenna models are well suited for applications that require time efficient determination of antenna properties.

Keywords—Antenna design, Neural Network, Genetic Algorithm, Bézier curves, Radiation pattern parameterization, Return loss parameterization

I. INTRODUCTION

Antennae are an integral part of every wireless communication system. They are responsible for the conversion of electrical signals to EM waves (and vice versa) to be transmitted (or received). Antennae are of considerable importance for the overall performance of any device involving transmission and reception. This places a huge focus on antenna design, which is a time taking and cumbersome process. In order to get optimum performance, a large number of antenna geometries are simulated using EM simulation tools. These EM simulators can predict antenna properties with a high degree of accuracy but usually require huge simulation times.

In order to cut down on time, a number of different methods have been reported in the recent past. These include computation of the radiation pattern based on the FFT [1] and application of Artificial Neural Networks (ANN) in antenna design [2], [3], [4]. Narayana [2] employed a Multi Layer Perceptron (MLP) and a Radial Basis Function (RBF) network, whereas this work employs a Feed-Forward Back-Propagation (FFBP) neural net; the superiority of the FFBP neural net over the MLP for function approximation has been established in the literature. Kushwah and Tomar [3] considered only the width and length of the patch, whereas this work considers other dimensions such as feed width, inset distance and inset gap as well. Further, they determined only the resonant frequency, whereas this work deals with the complete return loss curve. Additionally, the complete radiation pattern has also been produced in this work.

II. MAIN WORK

In this paper, a neural network based approach is presented to speed up the antenna design process. A microstrip rectangular patch antenna has been selected for this purpose. Neural networks have already been used in the antenna design process [2], [3], [4]. This work presents a novel approach of determining antenna properties using optimized Bézier parameterization with an Artificial Neural Network.

A. Data Collection

Data collection is the first and foremost step in the neural network design process. In this work, the High Frequency Structural Simulator (HFSS) has been taken as the reference source for the data. All simulations have been carried out at a frequency of 10 GHz. Radiation pattern and return loss have been selected as the primary performance parameters. Rogers RO3210 (tm) has been selected as the substrate; its details are given in Table I.

TABLE I. PROPERTIES OF ROGERS RO3210 (tm)

εr (dielectric constant)   10.2
loss tangent               0.03
thickness                  20 mil

A microstrip antenna has six primary dimensional parameters:

• Length
• Width
• Inset Distance
• Inset Gap
• Feed Length
• Feed Width

Out of these, five have been selected. Feed length has been dropped because it did not show much effect on the radiation pattern and return loss. Some initial simulations were carried out by changing one dimension while keeping the other dimensions constant. Using those simulation results, the following ranges, along with their sampling, were determined.

All these sweeps combine to make a total data population of 675 (5 × 5 × 3 × 3 × 3). This complete population is simulated
Proceedings of the International Bhurban Conference on Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, January
TABLE II. RANGES OF DIMENSIONAL PARAMETERS

Dimension        Range        Samples
Length           0.4–0.5      5
Width            0.63–0.73    5
Feed Width       0.04–0.06    3
Inset Distance   0.167–0.19   3
Inset Gap        0.02–0.03    3

and radiation pattern and return loss results are exported to MATLAB. The result plot ranges and sampling points are shown in Table III.

TABLE III. SIMULATED RESULT PLOT RANGES

Result              Range        Points   Resolution
Radiation Pattern   −180°–180°   181      2°
Return Loss         8–12 GHz     401      10 MHz
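The dimensional sweep of Table II and the result sampling of Table III can be sketched as below; this is a minimal illustration of the sweep enumeration only, not the authors' actual HFSS/MATLAB scripting, which the paper does not show:

```python
from itertools import product

def linspace(lo, hi, n):
    """n evenly spaced samples from lo to hi, endpoints included."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

# Dimensional sweep from Table II: 5 * 5 * 3 * 3 * 3 = 675 geometries.
sweep = list(product(
    linspace(0.4, 0.5, 5),       # Length
    linspace(0.63, 0.73, 5),     # Width
    linspace(0.04, 0.06, 3),     # Feed Width
    linspace(0.167, 0.19, 3),    # Inset Distance
    linspace(0.02, 0.03, 3),     # Inset Gap
))
print(len(sweep))  # 675

# Result sampling from Table III.
angles = linspace(-180.0, 180.0, 181)   # radiation pattern, 2 deg resolution
freqs = linspace(8e9, 12e9, 401)        # return loss, 10 MHz resolution
```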

Figure 1 and Figure 2 show a sample of the simulated results for radiation pattern and return loss respectively.
Fig. 1. Sample Curve (Radiation Pattern)

Fig. 2. Sample Curve (Return Loss)

B. Bézier Parameterization

In order to increase the prediction accuracy of the neural network, the data needs to be parameterized, i.e. represented with a smaller number of values. Cubic Bézier parameterization has been employed in this case. It involves four control points which define the whole curve. The control points can then be used in the following equations to reproduce the curve:

x(t) = x₀(1 − t)³ + 3x₁t(1 − t)² + 3x₂t²(1 − t) + x₃t³   (1)

y(t) = y₀(1 − t)³ + 3y₁t(1 − t)² + 3y₂t²(1 − t) + y₃t³   (2)

where (xᵢ, yᵢ) is the i-th Bézier control point and t ∈ [0, 1].

At t = 0, the Bézier curve starts from (x₀, y₀); then, as t increases, it moves towards (x₁, y₁) but starts to deviate towards (x₂, y₂). Finally, at t = 1, the Bézier curve ends at (x₃, y₃).

In this work, each sample of the radiation pattern and return loss has been divided into multiple segments, and a cubic Bézier parameterization of each segment has been carried out to adequately reproduce the curve. The return loss has been divided into four segments (two before the minimum and two after it), whereas the radiation pattern has been divided into three segments.

Figure 3 shows the Bézier curves along with the simulated curves for both the radiation pattern and the return loss. From the figure, it is evident that near the sharp bends the Bézier curve is not able to retrace the original simulated curve appropriately. This motivates the optimization of the Bézier control points of these segments.

Fig. 3. Comparison of Bézier and simulated curves. Bézier control points are unable to retrace the original curve appropriately near sharp bends.

C. Optimization

As discussed in the previous section, the Bézier control points corresponding to the 2nd and 3rd segments of the return loss parameterization are unable to retrace the curve adequately. This calls for optimization.

GA is one of the most widely used optimization algorithms. It is stochastic in nature. Unlike other gradient based methods,

stochastic methods have the ability to avoid local minima and aim for the global minimum. Due to its proven ability to solve complex optimization problems, GA has been employed in this work.

The Genetic Algorithm starts with a number of initial solutions, known as the population. In each iteration, some individuals are selected based on their conformance to the objective function. The selected individuals are then subjected to the operations of crossing-over and mutation. In each iteration, the best individual (solution) is preserved and passed on to the next generation to retain its good characteristics in the new population. As this process continues, the individual solutions converge towards an optimal solution. The entire process is based on random number generation and does not involve any deterministic computation for the next generation. This gives the genetic algorithm the ability to search the solution space without the fear of getting trapped in local minima. It also makes GAs suitable for solving problems whose objective function is discontinuous and non-differentiable [5].

This work required the optimization of the 5th, 6th, 8th and 9th control points out of 13. The solution for each sample has been constrained to within ±20 (along both dimensions) of the original point location. After optimization, the difference between the simulated curve and the Bézier curve is reduced appreciably.

Figure 4 shows the Bézier curve with and without optimization. After optimization, the Bézier curve retraces the original curve so closely that the original simulated curve disappears behind the optimized one.

Fig. 4. Bézier curve after optimization. After optimization, the Bézier curve tightly follows the simulated curve.

D. Normalization

Data normalization is a standard preprocessing step which is employed before the training of a neural network. It improves the neural network training and regression statistics [6]. Two types of normalization are generally considered:

• Min-Max normalization
• Standard deviation normalization

In Min-Max normalization, data is mapped to the range −1 to 1 or 0 to 1, whereas in standard deviation normalization, data is mapped to its number of standard deviations from the mean.

This work has utilized standard deviation normalization, which has three basic steps:

a. Determination of the mean (μ)
b. Determination of the standard deviation (σ)
c. Determination of the number of σ from μ

The neural network is trained on the standardized control points. Un-standardization of the estimated control points is achieved using the following equation:

X = μ + σx

where x is the standardized and X the un-standardized number.

E. Artificial Neural Network

An artificial neural network is a mathematical model of the human brain. Its basic unit is known as a neuron. A neuron has a number of inputs and a single output. It produces the weighted sum of all inputs and evaluates a function, known as the squashing function, at that weighted sum. The output of that squashing function is the output of the neuron [7]. Figure 5 shows the basic architecture of a neuron.

Fig. 5. Basic architecture of a neuron. The output of the neuron is the evaluation of the squashing function at the weighted sum of all inputs.

When a number of such neurons are stacked over one another, they form a layer. One or more layers can be placed one after the other to make a simple neural network.

In this work, a Feed-Forward Back-Propagation (FFBP) artificial neural network has been employed. The input to the neural network is the antenna dimension column vector, whereas the target is the normalized optimized Bézier control point vector. Two separate neural networks are trained for return loss and radiation pattern. Their schemes are shown in Figure 6 and Figure 7.

Before the neural network is trained, the data is divided into three sets:

Fig. 6. Scheme of return loss ANN

Fig. 7. Scheme of radiation pattern ANN

Fig. 8. Regression results for return loss neural network

1) Training set (65%)
2) Validation set (15%)
3) Test set (20%)
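A minimal sketch of this 65/15/20 split; the paper does not state how samples are assigned to the sets, so a seeded random permutation is assumed here:

```python
import random

def split_dataset(samples, fractions=(0.65, 0.15, 0.20), seed=0):
    """Shuffle the samples and cut them into training/validation/test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(round(fractions[0] * n))
    n_val = int(round(fractions[1] * n))
    return (shuffled[:n_train],                  # training set
            shuffled[n_train:n_train + n_val],   # validation set
            shuffled[n_train + n_val:])          # test set

# 675 is the full sweep population from Table II.
train, val, test = split_dataset(list(range(675)))
print(len(train), len(val), len(test))  # 439 101 135
```

The rounding convention and the seed are illustrative choices, not taken from the paper.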
The data has been divided as per the percentages given above. Some of the samples were pruned because they were either not suitable for parameterization or had huge deviations from the mean (μ).
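The standard deviation normalization of Section II-D (steps a–c) and the un-standardization X = μ + σx can be sketched as a simple round trip; the sample values below are illustrative, not data from the paper:

```python
def standardize(values):
    """Map each value to its number of standard deviations from the mean."""
    n = len(values)
    mu = sum(values) / n                                        # step a: mean
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5     # step b: std. deviation
    return [(v - mu) / sigma for v in values], mu, sigma        # step c: no. of sigma from mu

def unstandardize(x, mu, sigma):
    """Recover the original value: X = mu + sigma * x."""
    return mu + sigma * x

raw = [12.0, 15.0, 9.0, 18.0, 6.0]          # illustrative control-point coordinates
std, mu, sigma = standardize(raw)
restored = [unstandardize(x, mu, sigma) for x in std]
print(all(abs(a - b) < 1e-9 for a, b in zip(raw, restored)))  # True
```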

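For reference, the cubic Bézier reconstruction of Eqs. (1) and (2) in Section II-B can be sketched as follows; the control point values are purely illustrative (they are not from the paper's data):

```python
def bezier_point(ctrl, t):
    """Evaluate Eqs. (1)-(2): a cubic Bézier curve at parameter t in [0, 1].

    ctrl is a list of four control points (x_i, y_i)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = ctrl
    b0 = (1 - t) ** 3              # Bernstein basis polynomials
    b1 = 3 * t * (1 - t) ** 2
    b2 = 3 * t ** 2 * (1 - t)
    b3 = t ** 3
    x = x0 * b0 + x1 * b1 + x2 * b2 + x3 * b3
    y = y0 * b0 + y1 * b1 + y2 * b2 + y3 * b3
    return x, y

# One segment: the curve starts at the first control point and ends at the last.
ctrl = [(8.0, -2.0), (9.0, -25.0), (9.5, -30.0), (10.0, -3.0)]
print(bezier_point(ctrl, 0.0))  # (8.0, -2.0)
print(bezier_point(ctrl, 1.0))  # (10.0, -3.0)
```

Sampling t over [0, 1] for each segment reproduces the parameterized curve from its 13 control points.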
III. RESULTS

Figure 8 shows the regression results for the return loss neural network. Regression values of 0.91, 0.86 and 0.89 are achieved for the training, validation and test data sets respectively, with 10 neurons in the hidden layer. Table IV shows the training results for neural networks trained with different numbers of neurons in the hidden layer.

TABLE IV. RESULTS FOR RETURN LOSS NEURAL NETWORKS

No.   Neurons   Training   Validation   Test   Perf.
1     10        0.91       0.86         0.89   0.18
2     15        0.90       0.90         0.85   0.21
3     20        0.91       0.90         0.87   0.19

The neural network with 10 neurons was selected because its performance error happened to be the minimum, and its regression statistics are almost as good as those with 15 and 20 neurons. Increasing the number of neurons further did not improve the regression statistics or the performance.

Figure 9 shows the performance results for the return loss neural network. It is evident that the best performance is achieved at the 26th epoch.

Fig. 9. Performance results for return loss neural network

Figure 10 shows the regression results for the radiation pattern neural network. The results show very good regression values, which indicate good training. Table V shows the training statistics for the radiation pattern neural network. It can be seen that the neural network with 20 neurons shows the best performance, and its regression results are also good enough.

TABLE V. RESULTS FOR RADIATION PATTERN NEURAL NETWORKS

No.   Neurons   Training   Validation   Test   Perf.
1     10        0.91       0.86         0.89   0.21
2     15        0.90       0.89         0.85   0.17
3     20        0.93       0.94         0.93   0.10

Figure 11 shows the performance results for the radiation pattern neural network. The mean square error decreases as the training continues and gives the best result at the 10th epoch.

Finally, Figure 12 shows the neural network estimated return loss against the simulated return loss. It can be seen that the estimated curve lies almost on the simulated curve.

Figure 13 shows the comparison of the neural network estimated radiation pattern with the simulated pattern. Again,

the neural network based results show a close relation with the simulated results. These two results have been produced for the following dimensional parameters:

TABLE VI. TEST DIMENSIONS

Dimension        Value (cm)
Length           0.705
Width            0.45
Feed Length      0.478
Feed Width       0.18
Inset Distance   0.02
Inset Gap        0.04

Fig. 10. Regression results for radiation pattern neural network

Fig. 11. Performance results for radiation pattern neural network

Fig. 12. Neural network estimated vs. simulated return loss

Fig. 13. Neural network estimated vs. simulated radiation pattern

IV. CONCLUSION

The results shown in the previous section indicate that the neural network estimated results are in good accordance with the simulated results. The Bézier parameterization and its subsequent optimization contributed heavily to the better estimation of results. Although a huge amount of time is initially invested in the data collection and optimization stages, once the network has been trained, a considerable time and computational advantage can be achieved. Once the network has been designed, it can also be used as a mathematical model for optimization purposes in the antenna design process.

REFERENCES

[1] Nezhad, A. Z., Mirmohammad-Sadeghi, H., & Firouzeh, Z. H. (2010). A Fast Method to Compute Radiation Fields of Shaped Reflector Antennas by FFT. INTECH Open Access Publisher.
[2] Narayana, J. L., & Reddy, L. P. (2007). Design of Microstrip Antennas Using Artificial Neural Networks. Conference on Computational Intelligence and Multimedia Applications, 2007 (Vol. 1, pp. 332-334). IEEE.
[3] Kushwah, V. S., & Tomar, G. S. (2009). Design of microstrip patch antennas using neural network. Third Asian International Conference on Modelling & Simulation (pp. 720-724). IEEE.
[4] Turker, N., Gune, F., & Yildirim, T. (2006). Artificial neural networks applied to the design of microstrip antennas. Mikrotalasna revija, 12(1), 10-14.
[5] Rahmat-Samii, Y., & Michielssen, E. (1999). Electromagnetic optimization by genetic algorithms. John Wiley & Sons, Inc.
[6] Sola, J., & Sevilla, J. (1997). Importance of input data normalization for the application of neural networks to complex industrial problems. IEEE Transactions on Nuclear Science, 44(3), 1464-1468.
[7] Kharal, A., & Saleem, A. (2012). Neural networks based airfoil generation for a given Cp using Bézier-PARSEC parameterization. Aerospace Science and Technology, 23(1), 330-344.
