RADIOENGINEERING, VOL. 16, NO. 1, APRIL 2007
Neural Networks
for Synthesis and Optimization of Antenna Arrays
Lotfi MERAD, Fethi Tarik BENDIMERAD, Sidi Mohamed MERIAH, Sidi Ahmed DJENNAS
Abstract. This paper describes an application of back-propagation neural networks to the synthesis and optimization of antenna arrays. The neural network is able to model and to optimize antenna arrays by acting on radioelectric or geometric parameters and by taking into account predetermined general criteria. The neural network not only establishes the analytical relations needed for the optimization step, but also offers great flexibility between the input and output parameters of the system. The optimization step thus becomes possible thanks to the explicit relation given by the neural network. According to different formulations of the synthesis problem, such as acting on the feed law (amplitude and/or phase) and/or on the spatial positions of the radiating sources, results on antenna array synthesis and optimization by neural networks are presented and discussed. Moreover, the ANN generates the synthesis results very quickly compared to other approaches.

Keywords
Neural networks, modeling, optimization, synthesis, antenna arrays, printed antenna.

1. Introduction

In the domain of printed antenna arrays, the synthesis problem consists of estimating the variations of the amplitude and phase of the feed law and the spatial distribution of the radiating elements. This provides a directivity diagram as close as possible to a desired diagram specified by a mask [1]. The aim of this optimization is thus to seek the optimal combination of these different parameters so that the array complies with the requirements of the user, according to precise specifications.

In this domain, many deterministic processing tools for synthesis have been developed. Given the diversity of the aims sought by users, no general synthesis method appropriate to all cases exists; instead, there is a significant number of methods specific to each class of problem. This diversity of solutions can be used to build up a useful database for a general approach to the synthesis of a given antenna array.

We propose here a new approach of stochastic synthesis based on neural networks. This approach is able to model and optimize the antenna array system by acting upon various parameters of the array and taking into account predetermined general criteria. The introduction of this new variant of neural networks also represents an interesting alternative for printed antenna array synthesis.

At the learning step, the neural network establishes the analytical relations needed for the modeling and the optimization of the antenna arrays. There is no restriction on the number of input and output parameters of the system. The interest of such a system lies in the extreme flexibility introduced between the radioelectric characteristics of the antenna array. This approach has so far been carried out only with equally spaced linear arrays [2], [3]. However, it is well known that antenna performance related to beamwidth and side-lobe levels can be improved by choosing the best position or distribution and the best excitation coefficients for each element of unequally spaced arrays. Zooghby et al. [4] describe the use of a neural network approach to the problem of finding the weights of one- and two-dimensional adaptive arrays. An extension to multibeam array synthesis using back-propagation neural networks was given in [5].

In this paper, we present results concerning the synthesis and optimization of equally or unequally spaced printed linear antenna arrays by neural networks. For the learning step, in the first application we used an analytical method based on feed law distributions [6], and in the second application we used a stochastic optimization method based on the genetic algorithm [7–9]. The simulation results show the effectiveness of the proposed synthesis method.

2. Artificial Neural Networks

The concept of artificial formal neurons is introduced in this section. The architectures of neural networks related to this concept are presented in order to simultaneously highlight their main applications and their multiple functional possibilities.
Artificial neural networks (ANN) are data-processing models inspired by the structure and behavior of biological neurons. They are composed of interconnected units called formal or artificial neurons [10], [11].

An artificial neuron is an automaton that communicates with its neighbors through weighted connections and can activate itself according to the received signals (Fig. 1). All neurons thus take their decisions simultaneously, taking into consideration the evolution of the global state of the network. These neurons are connected to each other and arranged in layers to form a network. To a given input configuration, the network associates an output configuration, which is compared to the desired one. The synaptic weights (the characteristic elements of the neurons) are then modified in the network so as to minimize the obtained error.

Fig. 1. Scalar product neuron (inputs e1 ... en, weights Wi1 ... Win, summation Σ followed by the activation g).

   Si = g(Ai)   (1)

with

   Ai = Σ (k = 1 .. n) Wik ek

where the coefficient Wik is called the synaptic weight of the connection from k towards i and the ek are the input parameters. Generally, the scalar product neuron consists of two successive modules: a linear transformation (the scalar product) followed by a nonlinear transformation g. For the calculation of the synaptic weights, we proceed with a very important step: the learning step.

2.1 Learning Step

Neural networks can change their behavior to adapt to their environment (i.e. the problem); this is what we call learning. By presenting an ensemble of inputs, the network adjusts itself, modifying its weighting parameters so as to produce the desired results.

2.2 Use Step

In this step, we test the performance of the network, because it will later be confronted with situations that are close to the selected examples. With reference to the obtained responses, we will be able to appreciate the quality of the considered network.

In this paper, two distinct architectures are considered: the multi-layer back-propagation network and the RBF (Radial Basis Function) network. They are the most common architectures and the simplest nonlinear networks. The modeling abilities of these networks are analyzed.

2.3 Multi-Layer Back-Propagation Network

Multi-layer networks consist of an input layer whose neurons encode the information presented to the network, a variable number of internal layers called "hidden" layers, and an output layer (Fig. 2) containing as many neurons as there are desired responses. Neurons of the same layer are not connected to each other. The learning process of these networks is supervised. The algorithm used for this learning is known as the Back-Propagation Learning (BPL) algorithm. It includes two steps:

• a propagation step, which consists of presenting an input configuration to the network and then propagating it gradually from the input layer through the hidden layers up to the output layer;

• a back-propagation step (Fig. 3), which consists, after the propagation process, in minimizing the error obtained over the whole set of presented examples. This error is considered as a function of the synaptic weights (W); it represents the sum of squared distances between the determined responses (S) and the desired ones (Y) for all examples of the learning set. The process keeps recalculating the synaptic weights of the network until the number of epochs is reached or the error falls below the desired goal.

Fig. 2. Multi-layer networks.

Fig. 3. Back-propagation learning (BPL).
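The two BPL steps described above can be sketched in a few lines of code. The following is a minimal illustration only, not the implementation used in this work: it assumes one hidden layer with a hyperbolic-tangent activation and a linear output layer, and the topology and training parameters (learning rate, number of epochs, error goal) are placeholder values.

```python
import math
import random

def train_bpl(samples, n_in, n_hid, n_out, epochs=2000, lr=0.1, goal=1e-4):
    """Minimal Back-Propagation Learning (BPL) sketch: one hidden layer with
    tanh activation, a linear output layer, and the squared distance between
    the network responses S and the desired responses Y as the error."""
    random.seed(0)
    # synaptic weights, with a bias term folded into each layer
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
    W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid + 1)] for _ in range(n_out)]
    err = float("inf")
    for _ in range(epochs):
        err = 0.0
        for e, y in samples:
            ex = list(e) + [1.0]                       # inputs plus bias
            # propagation step: A_i = sum_k W_ik e_k, S_i = g(A_i)
            h = [math.tanh(sum(w * v for w, v in zip(row, ex))) for row in W1]
            hx = h + [1.0]
            s = [sum(w * v for w, v in zip(row, hx)) for row in W2]
            # back-propagation step: error signals for both layers
            d_out = [si - yi for si, yi in zip(s, y)]
            err += sum(d * d for d in d_out)
            d_hid = [(1.0 - h[j] ** 2) * sum(d_out[i] * W2[i][j] for i in range(n_out))
                     for j in range(n_hid)]
            for i in range(n_out):
                for j in range(n_hid + 1):
                    W2[i][j] -= lr * d_out[i] * hx[j]
            for j in range(n_hid):
                for k in range(n_in + 1):
                    W1[j][k] -= lr * d_hid[j] * ex[k]
        if err < goal:                                 # stop once the error goal is met
            break
    return W1, W2, err
```

Training stops either after the fixed number of epochs or as soon as the accumulated squared error falls below the goal, mirroring the stopping rule stated above.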
Fig. 6. Synoptic block diagram of the ANN model test step (the antenna array specifications are fed to the ANN synthesis model, and the resulting parameters are validated by plotting the radiation diagram).

With the Tchebycheff method we would normally obtain a constant side-lobe level, but we notice in Fig. 7 that this is not the case. This is due to the fact that we have taken into account the elementary diagram f(θ), which causes the reduction of the extreme side-lobes.

4.3 Synthesis from a Desired Mask

When the desired directivity diagram Fd(θ) is specified by a mask, the synthesized diagram must remain within the limits fixed by this mask.

Let us characterize the desired diagram from the half-mask represented by Fig. 8.

between the network output and the desired output is modified through the synaptic weights of the connections until the network produces the desired optimal output. The learning process chosen in our applications is of the supervised type [18]. The hyperbolic tangent is used as the activation function of the hidden layer and the linear function for the output layer.

where

   k(θ) = (Gmax(θ) − Fs(θ)) (Gmin(θ) − Fs(θ))   (6)

Gmin and Gmax represent, respectively, the masks of minimum and maximum beamwidth.

4.3.1 Learning Step

After several tests, we chose the RBF network with the following topology (Fig. 9).

In our application, the training set contains 75 samples and the input parameters vary in the following ranges: –42 dB ≤ SLL ≤ –12 dB, –4.5 dB ≤ UD ≤ –3.5 dB, 13° ≤ θ0 ≤ 17°, 0.45 λ ≤ dx ≤ 0.6 λ, with Gmin < 20°, dx being the inter-element distance.
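As an illustration of how an RBF network maps input specifications to outputs, the sketch below builds a generic RBF interpolator under common textbook assumptions (one Gaussian unit centred on each training sample, a shared width σ, and output weights obtained by solving the interpolation system). The actual topology, centers and widths retained in this work are not reproduced here.

```python
import math

def rbf_train(centers, targets, sigma=1.0):
    """Fit a generic RBF network: one Gaussian unit per training input,
    linear output weights obtained by solving the system Phi w = y."""
    n = len(centers)
    phi = [[math.exp(-sum((a - b) ** 2 for a, b in zip(xi, cj)) / (2.0 * sigma ** 2))
            for cj in centers] for xi in centers]
    # Gauss-Jordan elimination with partial pivoting (single output shown)
    a = [row[:] + [y] for row, y in zip(phi, targets)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c and a[c][c]:
                f = a[r][c] / a[c][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [a[i][n] / a[i][i] for i in range(n)]

def rbf_eval(centers, weights, x, sigma=1.0):
    """Network response: weighted sum of Gaussian basis functions."""
    return sum(w * math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2.0 * sigma ** 2))
               for w, c in zip(weights, centers))
```

With as many Gaussian units as training samples, the network interpolates the training data exactly; fewer centers or a regularized solve would trade exactness for smoothness.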
The training set contains 75 samples, and the input parameters vary in the following ranges: 12° ≤ Gmin ≤ 20°, 40° ≤ Gmax ≤ 50°, –5 dB ≤ UD ≤ –4 dB.

After several tests, we chose a back-propagation network with the following topology:

• 3 input neurons corresponding respectively to Gmin, Gmax and UD,

• 12 neurons in the hidden layer,

• 5 neurons in the output layer, representing the amplitude law of the first half of the array.

Fig. 11 shows the radiation diagram generated by the following test vector: Gmin = 20°, Gmax = 52°, UD = –3.5 dB; the testing set contains 42 samples.

the amplitude law A = [a1, a2, ..., aN], the phase law ψ = [ψ1, ψ2, ..., ψN] and the space distribution law X = [ΔX1, ΔX2, ..., ΔXN], which allows the best approach to the desired diagram Fd.

In our application, the desired diagram is specified by a mask; the database contains a set of input/output data obtained by simulation with the genetic algorithm. After several tests, an RBF network with the following topology (Fig. 12) was retained. The symmetrical antenna array is composed of 8 elements. The training set contains 45 samples, and the parameters vary in the following ranges: –27 dB ≤ SLL ≤ –12 dB, –2.7 dB ≤ UD ≤ –2 dB, 20° ≤ θ0 ≤ 23°, with Δθ = 6°.
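The role of the amplitude law A, the phase law ψ and the position law X can be illustrated with the standard array-factor expression for a linear array. This is a textbook formula given only as a sketch; it omits the elementary diagram f(θ) taken into account in the paper.

```python
import cmath
import math

def array_factor_db(amps, phases, positions, thetas_deg, wavelength=1.0):
    """Normalized array factor (in dB) of a linear array defined by an
    amplitude law, a phase law and element positions (in the same units
    as the wavelength). Standard textbook form, for illustration only."""
    k = 2.0 * math.pi / wavelength
    mags = [abs(sum(a * cmath.exp(1j * (k * x * math.sin(math.radians(t)) + p))
                    for a, p, x in zip(amps, phases, positions)))
            for t in thetas_deg]
    fmax = max(mags)
    return [20.0 * math.log10(m / fmax) if m > 0.0 else -300.0 for m in mags]
```

For instance, a symmetrical 8-element array with uniform amplitudes, zero phases and half-wavelength spacing produces its 0 dB maximum at broadside (θ = 0°); a synthesized pattern computed this way can then be checked against the Gmin/Gmax mask.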
ratory. His field of research is centered on the optimization process and its application to antenna array synthesis.

Fethi Tarik BENDIMERAD was born in Sidi Belabbes (Algeria) in 1959. He received the diploma of state engineer in Electronics (1983) from the Oran University, the Master degree in Electronics (1984) and the doctorate degree from the Nice Sophia-Antipolis University (1989). He worked as a Senior Lecturer at the Tlemcen University and he is the Director of the Telecommunications Laboratory. His main area of research is microwave techniques and radiation, and he is responsible for the antenna section.

Sidi Mohammed MERIAH was born in Tlemcen (Algeria) in 1970. He received the diploma of state engineer in Electronics (1992) from the National Polytechnic College, the Magister degree in Electronics (1997) and the doctorate degree from the Tlemcen University (2005). He is a Senior Lecturer at the Tlemcen University and a Researcher within the Telecommunications Laboratory. He works on smart antenna arrays.

Sidi Ahmed DJENNAS was born in Nédroma, Algeria, in 1974. He received the State Engineer and Magister diplomas in 1997 and 2001 from the Abou Bekr Belkaïd University, Tlemcen, Algeria. He is currently a Junior Lecturer at the same University and a Junior Researcher within the Telecommunications Laboratory. He works on the design and development of antenna analysis and synthesis for communication and radar systems.