
Use of different Optimization Techniques for

Side Lobe Level Reduction of Linear and


Time Modulated Antenna Array

Prajat Paul
17EC8087

Under the Supervision of


Prof. Durbadal Mondal

Electronics and Communication Engineering Department


National Institute of Technology Durgapur

This dissertation is submitted for the degree of


Bachelor of Technology

NIT Durgapur May 2020


Acknowledgements

I have no words to express my gratitude and thanks to Dr. Durbadal Mandal (Associate
Professor, NIT Durgapur) for his invaluable advice and effective guidance, which form the core
of this study. I regard him as a wise counsellor, and I will continue to seek his advice in my
future endeavors. The study would not have been possible without the experience and
versatility of the professors of the Electronics & Communication Engineering Department, so I
would like to express my heartfelt gratitude to all of them for their valuable technical and moral
suggestions and constant encouragement, without which this thesis report would not have come
into existence.

I would like to express my special gratitude to Mr. Avishek Chakraborty (Research Fellow,
NIT Durgapur) for his constant supervision and support.

I'm also grateful to my parents for motivating me and providing me with the patience and
strength I needed to finish this project. I'd like to express my gratitude to all of my classmates
who have offered me numerous helpful suggestions over the years.
01 Introduction

Creating seamless connectivity across the globe for the transmission of information and
knowledge has been a major and effective component of the modern era of advancement. It
allows people to collaborate and work together even when they are thousands of miles apart.
The pandemic brought on by the COVID-19 virus specifically requires social distancing, and
yet many industries across the globe, the education system for students who were studying
abroad but were sent back home, and many other online-based activities remain functional.
The development of this mega-network mainly focuses on reducing overall transmission time,
reaching longer distances and making the network more accessible to people. These services
are provided through a combination of wired and wireless connections, depending on
geographical location and other aspects. Antennas are a major component in completing the
wireless connectivity: they are used to transmit and receive radio waves in free space. This
necessity has encouraged the extensive and wide study of antenna design in various areas and
of how performance and efficiency can be increased in order to make its use more effective.

The antenna was first invented in Germany in the year 1888. It was designed as a medium
of wireless communication with the capability to propagate both radio and microwave
signals.

An antenna is responsible for the transmission and collection of electromagnetic radiation. The
electrical signals are first collected from the transmission line and then converted into radio
signals. The same process is performed at the receiving end in reverse order: the antenna
accepts the radio waves and transforms them into electrical signals. Moving charges generate
electromagnetic waves; when a charge oscillates, the electric field also varies, and this
changing electric field generates a displacement current. Maxwell's equations describe the
working of the device: whenever an electric field is generated, a magnetic field is generated as
well. Ampère's law governs the calculations regarding electric and magnetic field generation,
while the conversion of the magnetic field into the electric field is described by Faraday's law
[1]. The energy that enters the antenna is converted into electromagnetic waves, which are
radiated by the signal's current.
The antenna is the main component of a wireless communication system. Each type is specific
to a task and has its own pros and cons, and almost all the communication that takes place
around you involves an antenna. Modern antennas are also used in search radars and defense
radars. These devices even play a role in detecting breast cancer, which is a relatively new
area of research.

A single-element antenna has a wide radiation pattern but low directivity, which is not
ideal for cases that require power directed in a particular direction for long-distance
communication.

An antenna array is one of the simplest arrangements that can obtain a highly directed radiation
pattern, as several radiators with different phases and current excitations are grouped together.
This antenna array configuration can lead to higher signal strength and SNR, while minor lobes
and power wastage are reduced. Antenna array architecture and synthesis accuracy play a critical
role in today's communication systems. Over the last few decades, a great deal of research into
the design and synthesis of antenna arrays has been done in order to enhance the antenna's
radiation pattern.
The antenna array parameters influence the radiation pattern: the geometrical shape of the
array's configuration, the inter-element spacing, and the current excitation weight of each
element of the array. The increased traffic in the electromagnetic environment necessitates
the development of new designs: antenna arrays with a narrower half-power beam width
(HPBW) and a lower side lobe level (SLL).
In the following sections we will see how different evolutionary algorithms reduce
the side lobe level and, in turn, reduce interference in wireless data transmission.
Different algorithms have different effects, and even though they differ in
efficiency, the goal is the same for all of them.
02 Catalogue of Definitions

This section contains definitions for the most commonly used phrases in the literature. The
terms are used to describe the radiation patterns generated by various antenna arrays.
Antenna
The antenna is an electronic component of a transmission system that is used to transmit and
receive signals that contain information.
Antenna Array
Arrangement of multiple antennas in some geometrical configuration to obtain a given
radiation pattern.
Radiation Pattern
A graphical representation of distribution of antenna radiation in space.
Antenna Gain
Antenna gain measures how strongly the antenna radiates in its peak direction. The antenna is
designed in such a way that its radiated power is maximum in the required direction.
Gain = (Directivity) × (Efficiency)
Directivity
The ratio of the radiation intensity in a given direction from the antenna to the radiation
intensity averaged over all the directions.
Polarization
The polarization of an antenna refers to the polarization of the fields it transmits. This is a
crucial concept in device-to-device communication.
Side lobe level
A side lobe is a radiation lobe in any direction other than the direction of intended radiation; the side lobe level is the level of the strongest such lobe relative to the main lobe.
Isotropic Antenna
An isotropic antenna is a hypothetical lossless antenna which radiates and receives equally well
in all directions. However, it is ideal and not physically realizable.
Linear Antenna Array
A configuration of antenna elements placed along a straight line.
Time Modulated Antenna Array
A configuration of antenna elements equipped with RF switches to time modulate the excitation
distribution.
Abbreviations:

LAA – Linear Antenna Array

TMAA – Time Modulated Antenna Array

AF – Array Factor

PSO – Particle Swarm Optimization

HSA – Harmony Search Algorithm

SOA – Seeker Optimization Algorithm

SSA – Social Spider Algorithm

MFO – Moth Flame Optimization

SLL – Side Lobe Level

HPBW – Half Power Beam Width

FNBW – First Null Beam Width

DRR – Dynamic Range Ratio ~ Ratio of Max to Min Excitation Value
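To make the AF and SLL definitions concrete, the array factor of a uniform LAA and its peak SLL can be computed in a few lines. The sketch below is illustrative only: the 10-element, half-wavelength-spaced uniform excitation is an assumption for demonstration, not a design from this thesis.

```python
import numpy as np

def array_factor_db(weights, spacing_wl, theta):
    """Normalized array factor (AF) in dB of a linear array of isotropic
    elements; spacing_wl is the inter-element spacing in wavelengths."""
    w = np.asarray(weights, dtype=float)
    n = np.arange(len(w))
    psi = 2 * np.pi * spacing_wl * np.cos(theta)      # inter-element phase shift
    af = np.abs(np.exp(1j * np.outer(n, psi)).T @ w)  # |sum_n w_n e^{j n psi}|
    return 20 * np.log10(af / af.max())

def peak_sll_db(pattern_db):
    """Peak side lobe level: the largest local maximum below the main beam."""
    interior = pattern_db[1:-1]
    is_peak = (interior > pattern_db[:-2]) & (interior > pattern_db[2:])
    peaks = np.sort(interior[is_peak])
    return peaks[-2]                                  # peaks[-1] is the 0 dB main lobe

theta = np.linspace(1e-6, np.pi - 1e-6, 3601)         # avoid the exact endpoints
pattern = array_factor_db(np.ones(10), 0.5, theta)    # 10-element uniform LAA
sll = peak_sll_db(pattern)
```

A uniform 10-element array at half-wavelength spacing gives a peak SLL of roughly −13 dB; the optimization techniques of Section 03 search for non-uniform excitations that drive this value lower.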


03 Evolutionary Algorithms

The elements of an antenna array may be arranged in a linear, three-dimensional, circular, or
spherical pattern. Over the last few decades, many synthesis methods have focused on
suppressing the SLL while maintaining the main beam's gain. Uniformly excited and evenly
spaced antenna arrays have higher directivity, but the SLL is normally also higher. Non-
uniformly excited arrays with ideal excitations and ring radii have more degrees of freedom and
can lower the peak side lobe with a smaller number of elements than uniformly excited arrays.
The problem is modeled as a simple optimization problem, but the optimizing parameters are
broad and discrete, resulting in highly non-linear, discontinuous, and non-differentiable
array factors for the antenna array architectures.

Classical optimization methods have several disadvantages: i) they are highly sensitive to
starting points when the number of solution variables, and hence the size of the solution space,
increases; ii) they frequently converge to a local optimum solution, diverge, or revisit the
same suboptimal solution; iii) gradient search methods require a continuous and differentiable
objective cost function; iv) linear programming requires a piecewise linear cost approximation;
and v) non-linear programming suffers from problems of convergence and algorithm complexity.
For the optimization of the complex, highly non-linear, discontinuous, and non-differentiable
array factors of different antenna array designs, various heuristic evolutionary optimization
techniques have been adopted. These techniques are described in the following sections.

3.1. Particle Swarm Optimization (PSO)


Particle Swarm Optimization (PSO) is a flexible, robust, population-based stochastic
search/optimization technique with implicit parallelism, which can easily handle non-
differentiable objective functions, unlike traditional optimization methods. Standard PSO is less
susceptible to getting trapped in local optima than GA, Simulated Annealing, etc. Kennedy,
Eberhart and Shi developed the Standard PSO concept based on the behaviour of a swarm of
birds: it is developed through the simulation of bird flocking in multidimensional space, where
the flock optimizes a certain objective function. Each agent knows its best value so
far (pbest); this information corresponds to the personal experience of each agent.
Moreover, each agent knows the best value so far in the group (gbest) among the pbests. Each
agent tries to modify its position using the following information.
 The distance between the current position and the pbest.
 The distance between the current position and the gbest.
Mathematically, the velocities of the particles are modified according to (3.1.1).

V_i^(k+1) = w × V_i^(k) + C1 × rand1 × (pbest_i^(k) − S_i^(k)) + C2 × rand2 × (gbest^(k) − S_i^(k))    …(3.1.1)

where V_i^(k) is the velocity of the ith particle at the kth iteration; w is the weighting function; C1 and
C2 are the positive weighting factors; rand1 and rand2 are random numbers between 0 and
1; S_i^(k) is the current position of the ith particle at the kth iteration; pbest_i^(k) is the personal best of
the ith particle at the kth iteration; gbest^(k) is the group best at the kth iteration. The searching point
in the solution space may be modified by the following equation.

S_i^(k+1) = S_i^(k) + V_i^(k+1)    …(3.1.2)
The first term of (3.1.1) involves the previous velocity of the agent. The second and third terms
are used to change the velocity of the agent; without them, the agent would keep on ‘‘flying’’
in the same direction until it hits the boundary. The first term thus corresponds to a kind of
inertia and encourages the exploration of new areas.

3.1.1 Steps of PSO

• Initialize the control parameters, the maximum number of iteration cycles, and a total
randomly generated population of np real-coded particles.

• Evaluation of error fitness values by using the objective function for the current position
Si of each Particle.

• Computation of the initial personal best solution vector (pbest) and group best solution
vector (gbest).

• Modify the velocity and position of each particle according to (3.1.1) and (3.1.2).

• Computation of the updated fitness values and then, updating pbest vectors, and gbest
vector.

• Iteration continues until the maximum number of iteration cycles is reached or the
desired solution with the optimal set of coefficients is finally obtained.
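The steps above can be sketched as a short program. This is a minimal illustration of (3.1.1)-(3.1.2) on a stand-in sphere objective; the parameter values (np = 30, w = 0.7, C1 = C2 = 1.5) and the search bounds are assumptions for demonstration, not the tuned values used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal PSO loop following (3.1.1)-(3.1.2)."""
    S = rng.uniform(lo, hi, (n_particles, dim))       # particle positions S_i
    V = np.zeros((n_particles, dim))                  # particle velocities V_i
    pbest = S.copy()                                  # personal bests
    pbest_cost = np.apply_along_axis(cost, 1, S)
    gbest = pbest[pbest_cost.argmin()].copy()         # group best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - S) + c2 * r2 * (gbest - S)   # (3.1.1)
        S = np.clip(S + V, lo, hi)                                  # (3.1.2) + bounds
        c = np.apply_along_axis(cost, 1, S)
        improved = c < pbest_cost
        pbest[improved] = S[improved]
        pbest_cost[improved] = c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

best, val = pso(lambda x: np.sum(x ** 2), dim=5)      # sphere stand-in objective
```

In the actual synthesis problem, the sphere function would be replaced by a cost built from the array factor (e.g. penalizing the peak SLL).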
3.2. Harmony Search Algorithm (HSA)

In the basic HS algorithm, each solution is called a harmony and is represented by an
n-dimensional real vector. An initial randomly generated population of harmony vectors is stored
in an HM. Then, a new candidate harmony is generated from all the solutions in the HM by
adopting a memory consideration rule, a pitch adjustment rule and a random re-initialization.
Finally, the HM is updated by comparing the new candidate harmony vector and the worst
harmony vector in the HM. The worst harmony vector is replaced by the new candidate vector
if it is better than the worst harmony vector in the HM. The above process is repeated until a
certain termination criterion is met. Thus, the basic HS algorithm consists of three basic phases.
These are initialization, improvisation of a harmony vector and updating the HM. Sequentially,
these phases are described below.

3.2.1 Initialization of the problem and the parameters of the HS algorithm

In general, a global optimization problem can be stated as follows: min f(X) s.t. xj ∈
[para_j^min, para_j^max], j = 1, 2, ..., n, where f(X) is the objective function, X = [x1, x2, ..., xn] is
the set of design variables and n is the number of design variables. Here, para_j^min and para_j^max are
the lower and upper bounds for the design variable xj, respectively. The parameters of the HS
algorithm are the harmony memory size (HMS) (the number of solution vectors in HM), the
harmony memory consideration rate (HMCR), the pitch adjusting rate (PAR), the distance
bandwidth (BW), and the number of improvisations (NI); NI is the same as the total number of
fitness function calls (NFFCs). It may be set as a stopping criterion.

3.2.2 Initialization of the HM

The HM consists of HMS harmony vectors. Let X^j = [x_1^j, x_2^j, ..., x_n^j] represent the jth harmony
vector, which is randomly generated within the parameter limits [para_i^min, para_i^max]. Then,
the HM matrix is filled with the HMS harmony vectors as in (3.2.1).

HM = [ x_1^1    x_2^1    ...  x_n^1
       x_1^2    x_2^2    ...  x_n^2
       ...
       x_1^HMS  x_2^HMS  ...  x_n^HMS ]    …(3.2.1)
3.2.3 Improvisation of a new harmony

A new harmony vector X^new = (x_1^new, x_2^new, ..., x_n^new) is generated (called improvisation) by
applying three rules, viz. i) a memory consideration, ii) a pitch adjustment and iii) a random
selection. First of all, a uniform random number r1 is generated in the range [0, 1]. If r1 is less
than HMCR, the decision variable x_j^new is generated by the memory consideration; otherwise,
x_j^new is obtained by a random selection (i.e., random re-initialization between the search
bounds). In the memory consideration, x_j^new is selected from any harmony vector i in [1, 2, ...,
HMS]. Secondly, each decision variable x_j^new will undergo a pitch adjustment with a
probability of PAR if it was updated by the memory consideration. The pitch adjustment rule is
given as follows in (3.2.2):

x_j^new = x_j^new ± r3 × BW    …(3.2.2)

where r3 is a uniform random number between 0 and 1.

3.2.4 Updating of HM

After a new harmony vector X^new is generated, the HM will be updated by the survival of the
fitter vector between X^new and the worst harmony vector X^worst in the HM. That is, X^new will
replace X^worst and become a new member of the HM if the fitness value of X^new is better than
the fitness value of X^worst (for minimization optimization).

The pseudo code of the HS algorithm is as follows:

Step 1: Set the parameters HMS, HMCR, PAR, BW, NI and D.

Step 2: Initialize the HM and calculate the objective function value for each harmony vector.

Step 3: Improvise a new harmony vector X^new as follows:

for (j = 0; j < D; j++)

    if (r1 < HMCR) then

        x_j^new = x_j^i, i ∈ {1, 2, ..., HMS}    // memory consideration

        if (r2 < PAR) then

            x_j^new = x_j^new ± r3 × BW    // pitch adjustment; r1, r2, r3 ∈ [0, 1]

        end if

    else

        x_j^new = para_j^min + r × (para_j^max − para_j^min)    // r ∈ [0, 1]

    end if

end for

Step 4: Update the HM as Xworst = Xnew if J (Xnew) < J (Xworst).

Step 5: If NI is completed, return the best harmony vector Xbest in the HM; otherwise go back
to Step 3.
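The pseudo code above can be realized as a compact sketch. The sphere objective and the parameter values (HMS = 20, HMCR = 0.9, PAR = 0.3, BW = 0.05, NI = 5000) are illustrative assumptions standing in for the antenna objective and its tuned settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def harmony_search(cost, dim, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   ni=5000, lo=-5.0, hi=5.0):
    """Minimal HS loop (Steps 1-5)."""
    hm = rng.uniform(lo, hi, (hms, dim))              # harmony memory (HM)
    f = np.apply_along_axis(cost, 1, hm)
    for _ in range(ni):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                # pitch adjustment: +/- r3*BW
                    new[j] += (2 * rng.random() - 1) * bw
            else:                                     # random re-initialization
                new[j] = lo + rng.random() * (hi - lo)
        new = np.clip(new, lo, hi)
        f_new = cost(new)
        worst = f.argmax()
        if f_new < f[worst]:                          # survival of the fitter vector
            hm[worst] = new
            f[worst] = f_new
    return hm[f.argmin()], f.min()

best, val = harmony_search(lambda x: np.sum(x ** 2), dim=5)
```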

3.3. Seeker Optimization Algorithm (SOA)

Seeker Optimization Algorithm (SOA) is a population-based heuristic search algorithm. It
regards the optimization process as a search for the optimal solution carried out by a seeker
population. Each individual of this population is called a seeker. The total population is
randomly categorized into three subpopulations, which search over several different domains
of the search space. All the seekers in the same subpopulation constitute a neighborhood,
which represents the social component for the social sharing of information.

3.3.1 Calculation of search direction, d_ij(t)

In the SOA, a search direction dij (t) and a step length αij (t) are computed separately for each
ith seeker on each jth variable at each time step t, where αij (t) ≥ 0 and dij (t) ∈ {−1, 0, 1}.
Here, i represents the population number and j represents the optimizing variable number. It is
the natural tendency of the swarms to reciprocate in a cooperative manner while executing their
needs and goals. Normally, there are two extreme types of cooperative behavior prevailing in
swarm dynamics. One, egotistic, is entirely pro-self and another, altruistic, is entirely
pro-group. Every seeker, as a single sophisticated agent, is uniformly egotistic: he believes that he
should go toward his historical best position according to his own judgment. This attitude of the
ith seeker may be modelled by an empirical direction vector d_i,ego(t) as in (3.3.1).

d_i,ego(t) = sign(p_i,best(t) − x_i(t))    …(3.3.1)

In (3.3.1), sign(·) is a signum function applied to each variable of the input vector. On the other hand,
in altruistic behavior, seekers want to communicate with each other, cooperate explicitly, and
adjust their behaviors in response to the other seekers in the same neighborhood region for
achieving the desired goal. That means the seekers exhibit entirely pro-group behavior. The
population then exhibits a self-organized aggregation behavior in which the positive feedback
usually takes the form of attraction toward a given signal source. Two optional altruistic
directions may be modelled as in (3.3.2)-(3.3.3).

d_i,alt1(t) = sign(gbest(t) − x_i(t))    …(3.3.2)

d_i,alt2(t) = sign(lbest(t) − x_i(t))    …(3.3.3)

In (3.3.2)-(3.3.3), gbest(t) represents the neighbors' historical best position and lbest(t) the
neighbors' current best position. Moreover, seekers enjoy the property of pro-activeness:
seekers do not simply act in response to their environment; they are able to exhibit goal-directed
behavior. In addition, future behavior can be predicted and guided by past behavior. As
a result, a seeker may be pro-active in changing his search direction and exhibit goal-directed
behavior according to his past behavior. Hence, each seeker is associated with an empirical
direction called the pro-activeness direction, as given in (3.3.4).

d_i,pro(t) = sign(x_i(t1) − x_i(t2))    …(3.3.4)

In (3.3.4), t1, t2 ∈ {t, t−1, t−2} and it is assumed that x_i(t1) is better than x_i(t2). The
aforementioned four empirical directions presented in (3.3.1)-(3.3.4) direct the seeker to take a
rational decision on his search direction. If the jth variable of the ith seeker goes towards the
positive direction of the coordinate axis, d_ij(t) is taken as +1; if it goes towards the negative
direction of the coordinate axis, d_ij(t) is taken as −1; and d_ij(t) is taken as 0 if the ith seeker
stays at the current position. Every variable j of d_i(t) is selected by applying the proportional
selection rule stated in (3.3.5).

d_ij = 0,   if r_j ≤ p_j^(0)
d_ij = +1,  if p_j^(0) < r_j ≤ p_j^(0) + p_j^(+1)
d_ij = −1,  if p_j^(0) + p_j^(+1) < r_j ≤ 1    …(3.3.5)

In (3.3.5), r_j is a uniform random number in [0, 1] and p_j^(m), m ∈ {0, +1, −1}, is the proportion
of the value m in the set {d_ij,ego, d_ij,alt1, d_ij,alt2, d_ij,pro} on each variable j of all the four
empirical directions, i.e. p_j^(m) = (the number of m) / 4.

3.3.2 Calculation of step length, α_ij(t)

Different optimization problems often have different ranges of fitness values. To design a fuzzy
system to be applicable to a wide range of optimization problems, the fitness values of all the
seekers are turned into the sequence numbers from 1 to S as the inputs of fuzzy reasoning. The
linear membership function is used in the conditional part since the universe of discourse is a
given set of numbers, i.e. 1, 2, ..., S. The expression is presented in (3.3.6).

µ_i = µ_max − ((S − I_i)/(S − 1)) × (µ_max − µ_min)    …(3.3.6)

In (3.3.6), I_i is the sequence number of x_i(t) after sorting the fitness values, and µ_max is the
maximum membership degree value, which is equal to or a little less than 1.0. Here, the value
of µ_max is taken as 0.95.

A fuzzy system works on the principle of a control rule of the form "If {the conditional part},
then {the action part}". The Bell membership function µ(x) = exp(−x²/(2δ²)) is well utilized in
the literature to represent the action part. The membership degree values of the input variables
beyond [−3δ, +3δ] are less than 0.0111 (µ(±3δ) = 0.0111), so the elements beyond [−3δ, +3δ]
in the universe of discourse can be neglected for a linguistic atom. Thus, the minimum value
µ_min = 0.0111 is set. Moreover, the parameter δ of the membership function is determined by
(3.3.7).

δ = ω × abs(x_best − x_rand)    …(3.3.7)

In (3.3.7), abs(·) returns the absolute value of the input vector as the corresponding output
vector. The parameter ω is used to decrease the step length with increasing time step so as to
gradually improve the search precision. In the present experiment, ω is linearly decreased from
0.9 to 0.1 during a run. Here, x_best and x_rand are the best seeker and a randomly selected
seeker, respectively, from the same subpopulation to which the ith seeker belongs. It is to be
noted that x_rand is different from x_best, and δ is shared by all the seekers in the same
subpopulation. In order to introduce randomness in each variable and to improve the local
search capability, the following equation is introduced to convert µ_i into a vector µ_i with
elements as given by (3.3.8).

µ_ij = RAND(µ_i, 1)    …(3.3.8)

In (3.3.8), RAND(µ_i, 1) returns a uniformly random real number within [µ_i, 1]. Equation
(3.3.9) denotes the action part of the fuzzy reasoning and gives the step length (α_ij) for every
variable j.

α_ij = δ_j × √(−ln(µ_ij))    …(3.3.9)

3.3.3 Updating of seekers’ positions

In a population of size S, for each seeker i (1 ≤ i ≤ S), the position update on each variable j is
given by (3.3.10).

x_ij(t + 1) = x_ij(t) + α_ij(t) × d_ij(t)    …(3.3.10)

3.3.4 Subpopulations learn from each other

Each subpopulation searches for the optimal solution using its own information, which means
that a subpopulation may become trapped in a local optimum, yielding premature convergence.
Subpopulations must therefore learn from each other about the optimum information they have
so far acquired in their respective domains. Thus, the position of the worst seeker of each
subpopulation is combined with the best one in each of the other subpopulations using the
binomial crossover operator expressed in (3.3.11).

x_knj,worst = x_lj,best,    if rand_j ≤ 0.5
x_knj,worst = x_knj,worst,  else    …(3.3.11)

In (3.3.11), rand_j is a uniformly random real number within [0, 1], x_knj,worst denotes the
jth variable of the nth worst position in the kth subpopulation, and x_lj,best is the jth variable of the
best position in the lth subpopulation. Here, n, k, l = 1, 2, ..., K − 1 and k ≠ l. In order to
increase the diversity in the population, the good information acquired by each subpopulation is
shared among the other subpopulations.

The steps of SOA are as follows:

Step 1: Initialize the positions of np seekers in the search space randomly and uniformly.

Step 2: Set the time step k = 0.

Step 3: Compute the cost functions/objective function of the initial positions. The initial
historical best position among the population is achieved. Set the personal historical
best position of each seeker to his current position.

Step 4: Let k = k + 1.

Step 5: Select the neighbor of each seeker.

Step 6: Determine the search direction and step length for each seeker.

Step 7: Update the position of each seeker.

Step 8: Compute the objective function for each seeker.

Step 9: Update the historical best position among the population and historical best position of
each seeker.

Step 10: Subpopulation learns from each other.

Step 11: Repeat from Step 4 till the end of the maximum iteration cycles/stopping criterion.

Step 12: Determine the best string (array index) corresponding to the optimum objective
function value.

Step 13: Determine the optimal optimizing variables corresponding to the grand optimum
objective function value.
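The steps above can be condensed into a single-population sketch. Two simplifications are assumed here: the proportional selection rule (3.3.5) is replaced by a majority vote over the empirical directions, and the subpopulation learning step (3.3.11) is omitted; the sphere objective and all parameter values stand in for the thesis's own settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def soa_sketch(cost, dim, pop=30, iters=300, lo=-5.0, hi=5.0,
               mu_max=0.95, mu_min=0.0111):
    """Single-population sketch of the SOA update (3.3.1)-(3.3.10)."""
    x = rng.uniform(lo, hi, (pop, dim))
    f = np.apply_along_axis(cost, 1, x)
    pbest, pbest_f = x.copy(), f.copy()
    prev = x.copy()
    for t in range(iters):
        gbest = pbest[pbest_f.argmin()]
        w = 0.9 - 0.8 * t / iters                    # omega decreases 0.9 -> 0.1
        # empirical directions (3.3.1)-(3.3.4): egotistic, altruistic, pro-active
        d_ego = np.sign(pbest - x)
        d_alt = np.sign(gbest - x)
        d_pro = np.sign(x - prev)
        d = np.sign(d_ego + d_alt + d_pro)           # majority vote (simplified 3.3.5)
        # fuzzy step length (3.3.6)-(3.3.9): better seekers take smaller steps
        seq = np.empty(pop)
        seq[f.argsort()] = np.arange(pop, 0, -1)     # best seeker gets I_i = S
        mu = mu_max - (pop - seq) / (pop - 1) * (mu_max - mu_min)
        mu_ij = rng.uniform(mu[:, None], 1.0, (pop, dim))         # (3.3.8)
        delta = w * np.abs(gbest - pbest[rng.integers(pop)])      # (3.3.7)
        alpha = delta * np.sqrt(-np.log(mu_ij))                   # (3.3.9)
        prev = x.copy()
        x = np.clip(x + alpha * d, lo, hi)                        # (3.3.10)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
    return pbest[pbest_f.argmin()], pbest_f.min()

best, val = soa_sketch(lambda v: np.sum(v ** 2), dim=5)
```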

3.4. Social Spider Algorithm (SSA)

The Social Spider Optimization (SSO) algorithm [2], proposed by Erik Cuevas et al. in 2013, is a
population-based algorithm that simulates the cooperative behavior of social spiders. SSO
considers two types of search agents (spiders): male and female. Each individual is conducted
with a different set of evolutionary operators depending on its gender, mimicking the different
cooperative behaviors typically found in the colony. This individual categorization evades the
concentration of particles at the best positions, reducing critical flaws present in several SI
approaches such as premature convergence to suboptimal solutions and a limited balance
between exploration and exploitation. These characteristics have motivated the
use of the SSO algorithm to solve a wide variety of engineering applications in diverse areas
including machine learning: Artificial Neural Networks Training [3, 4] and Support Vector
Machine Parameter Tuning [5, 6]; control: Fractional Controller Design [7–9] and frequency
controllers [10]; image processing: Image Multilevel Thresholding [11], Image Contrast
Enhancement [12], and Image Template Matching [13], and energy: Distribution of Renewable
Energy [14], Congestion Management [15], and Anti-Islanding Protection [16].

The search space of the SSA optimization problem is formulated as a hyper-dimensional spider
web. Spiders are the agents of the SSA optimization problem, and they can move freely on the
web. Each spider on the web holds a position and each position on the web represents a feasible
solution based on the fitness function. A predefined number of spiders on the web generate
vibration which is transmitted over the web. It is observed that spiders have a very accurate
sense of vibration. It is also observed that spiders can distinguish different vibrations
propagated on the same web and they can also sense their respective intensities. In SSA, spiders
move randomly on the web and generate a vibration after reaching a new position different
from the previous position. The intensity of the vibration is correlated with the fitness of the
position. The vibration is transmitted over the web, and it is sensed by the other spiders on the
web. A position with a better fitness value has a larger vibration intensity than one with a worse
fitness value; as the fitness value approaches the global optimum, the vibration intensity does
not vary excessively. In this way, the spiders on the same web share their personal
experiences with others to form a collective social knowledge. This behaviour of spiders is
mathematically modelled in the SSA algorithm. The main steps of the SSA algorithm are given
as a flowchart in Figure 1.

The key features of the SSA algorithm are discussed below.


The flowchart in Figure 1 summarizes the main steps of SSA: the parameters (maximum
iteration cycle G and desired accuracy) are specified and an initial population of spiders is
generated, with memory assigned to each spider; the target vibration intensity of each spider is
initialized; at every iteration, the fitness of each spider is evaluated and a vibration is generated
at its current position; the intensities of all generated vibrations are evaluated and the strongest
vibration is selected, and the target vibration is updated whenever the strongest vibration
exceeds it; the dimension mask of each spider is then determined, a new following position is
generated based on the mask, and a random walk with constraint handling is performed for
each spider; the loop repeats until the iteration or accuracy criterion is met.

Figure 1: Flowchart for SSA optimization.


3.4.1 Spiders

The agents of SSA optimization are spiders. The algorithm starts with a fixed number of spiders
positioned over the hyper dimensional spider net. Each spider on the net has a memory space
used to store its current position and the corresponding fitness value on the web. The memory
also consists of the target vibration in the previous iteration, preceding movement and a
dimensional mask. The spider in the web generates vibration when it reaches a new position.
The intensity of vibration depends on the fitness value at that position. The spiders on the web
can sense this vibration when one of the spiders reaches a better position on the web. In this
way, the spiders in the web share their personal experiences in a collective social manner.

3.4.2 Vibration

The generation and detection of vibration by spiders on the web are essential concepts in SSA.
In SSA, a vibration is characterized by two properties, namely its source position and its source
intensity. The source position is defined in the hyper-dimensional search space on the web. The
intensity of the vibration is correlated with the fitness value at the source position and varies in
the range [0, +∞).

The intensity of vibration at a particular source position is defined as

I(P_s, P_s, t) = log(1/(f(P_s) − C) + 1)    (3.4.1)

where I(P_s, P_s, t) represents the intensity of the vibration at the source position and f(P_s) is
defined as the fitness at the source position. C is a small constant, which is smaller than all the
possible fitness values on the web. The intensity of vibration is related to the source position:
positions with better fitness values have larger vibration intensities.

The distance between two source positions 'a' and 'b' on the web is defined as D(P_a, P_b). The
vibration intensity attenuates over this distance as follows:

I(P_a, P_b, t) = I(P_a, P_a, t) × exp(−D(P_a, P_b)/(σ × r_a))    (3.4.2)

where r_a ∈ (0, ∞) is a user control parameter and σ is the standard deviation of all the spider
positions along each dimension.


3.4.3 Search Pattern

The three phases of the SSA algorithm are initialization, iteration and final. These phases are
described as a flowchart in Figure 2.7. In the random walk of a spider in the ith dimension, the
dimension mask determines the followed position P^fo_{s,i} as given in (3.4.3).

P^fo_{s,i} = { P^tar_{s,i},  if m_{s,i} = 0
             { P^r_{s,i},   if m_{s,i} = 1     (3.4.3)

where r is a random integer value, r ∈ [1, pop], and m_{s,i} is the ith dimension of the dimension
mask m of spider s. The random walk of the spider follows (3.4.4).

P_s(t+1) = P_s + (P_s − P_s(t−1)) × r + (P^fo_s − P_s) ⊙ R     (3.4.4)

where R is a vector of random floating-point numbers, R ∈ (0, 1).

The spider approaches P^fo_s along each dimension with a random factor generated in (0, 1). This
random factor is generated independently for different dimensions. During the random walk,
a spider may move out of the web, which violates the constraints of the optimization problem.
The handling of these constraints is the final step of SSA.

The boundary-handled position in the ith dimension, P_{s,i}(t+1), is obtained by reflecting the
spider back towards the inside of the search space:

P_{s,i}(t+1) = { (x̄_i − P_{s,i}) × r,  if P_{s,i}(t+1) > x̄_i
              { (P_{s,i} − x_i) × r,   if P_{s,i}(t+1) < x_i     (3.4.5)

where x̄_i is the upper bound of the search space in the ith dimension, x_i is the lower bound, and
r is a random floating-point number, r ∈ (0, 1).
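The mask-based following position, the random walk and the boundary reflection described above can be sketched for a single spider as follows. This is a hedged sketch: the function and variable names are illustrative, and the boundary handling uses a bounds-preserving reflection in the spirit of (3.4.5), not necessarily the exact form of the original paper.

```python
import random

def walk_step(pos, prev_pos, target, others, mask, lb, ub):
    # One SSA random-walk step for a single spider (illustrative sketch).
    dim = len(pos)
    # Eq. (3.4.3): follow the target source position where the mask bit
    # is 0, and the position of a randomly chosen spider where it is 1.
    p_fo = [target[i] if mask[i] == 0 else random.choice(others)[i]
            for i in range(dim)]
    r = random.random()  # shared inertia factor
    new_pos = []
    for i in range(dim):
        # Eq. (3.4.4): inertia term plus an independently weighted
        # approach towards the followed position along each dimension.
        x = pos[i] + (pos[i] - prev_pos[i]) * r \
            + (p_fo[i] - pos[i]) * random.random()
        # Eq. (3.4.5): reflect a spider that walked off the web back
        # towards the inside of the search space.
        if x > ub[i]:
            x = pos[i] + (ub[i] - pos[i]) * random.random()
        elif x < lb[i]:
            x = pos[i] - (pos[i] - lb[i]) * random.random()
        new_pos.append(x)
    return new_pos
```

Because the reflection interpolates between the current position and the violated bound, the returned position is always inside the search space.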

3.5. Moth Flame Optimization (MFO)

The Moth-Flame Optimization (MFO) algorithm was proposed in 2015 [17] as one of the
seminal attempts to simulate the navigation of moths computationally and turn it into an
optimization algorithm; it has since been widely used in science and industry. Moths are fancy
insects closely related to the family of butterflies. It is observed that moths converge towards
light. They utilize a mechanism called transverse orientation for navigation: a moth flies by
maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling
long distances in a straight path [18]. This behaviour of moths is modelled mathematically in
the MFO algorithm. In MFO, the candidate solutions are moths, and the positions of the moths
in the search space are the problem's variables. Moths can fly in any dimensional space by
changing their position vectors. The set of moths in the MFO algorithm is represented in
matrix form as given in (3.5.1).

M = | m_{1,1}  m_{1,2}  ⋯  m_{1,d} |
    | m_{2,1}  m_{2,2}  ⋯  m_{2,d} |
    |   ⋮        ⋮           ⋮    |
    | m_{n,1}  m_{n,2}  ⋯  m_{n,d} |     (3.5.1)

where n and d are the numbers of moths and variables (dimensions), respectively. The fitness
values of all the moths are stored in an array as given in (3.5.2).

OM = [OM_1, OM_2, …, OM_n]^T     (3.5.2)

where n is the number of moths. The fitness value is the return value of the fitness (cost)
function for each moth. The position vector of each moth (a row of the matrix M) is passed to
the fitness function, and its output is assigned as the fitness value of the corresponding moth
in the array OM.

Another essential part of the MFO algorithm is flame. A similar matrix of flames is constructed
like the matrix of moths.

F = | F_{1,1}  F_{1,2}  ⋯  F_{1,d} |
    | F_{2,1}  F_{2,2}  ⋯  F_{2,d} |
    |   ⋮        ⋮           ⋮    |
    | F_{n,1}  F_{n,2}  ⋯  F_{n,d} |     (3.5.3)

where n and d are the numbers of moths and variables (dimensions), respectively. The matrices
M and F have equal dimensions. The fitness values of the flames are stored correspondingly
in an array as given in (3.5.4).
OF = [OF_1, OF_2, …, OF_n]^T     (3.5.4)

where n is the number of moths. In the MFO algorithm, both the moths and flames are
considered to be the solutions. The main difference between the moths and the flames is the
process in which they are updated in each iteration. The actual search agents are the moths
which move around the search space, whereas, the best positions obtained by the moths are
represented by flames. The flames can be treated as flags where each moth searches around a
flag and updates its position when it finds a better solution. With this method, a moth never
loses its track from the best solution.

The two main categories of optimisation technique are an individual-based algorithm and
population-based algorithm. Individual-based algorithm suffers from premature convergence,
which prevents the algorithm from converging towards the global best position. The
population-based algorithms like MFO have high ability to avoid local optima because a set of
solutions are involved in the optimisation. Also, the advantage of the MFO algorithm is that
the information is exchanged between the candidate solutions which assist them in overcoming
different difficulties of the search space.

The reputation of the MFO algorithm is due to several reasons:

 The position update mechanism of moths towards the flames mostly promotes
exploitation.
 The adaptive convergence constant (r) accelerates exploitation around the flames over
the course of the iterations.
 Local optima avoidance is high because MFO employs a set of solutions to perform the
optimisation.
 The exploration and exploitation of the search space are appropriately balanced by
decreasing the number of flames.
 The best solutions are saved, so the solutions never get lost.
 In the MFO algorithm, the moths always update their position with respect to flames.
The flames are the most promising solutions obtained so far throughout the iteration. In
this way, the MFO algorithm reaches the global best solutions.
3.5.1 Steps of MFO

MFO algorithm consists of three steps by which it reaches the near-global optimal point. The
three parts of the algorithm are as follows:

MFO = (I, P, T)     (3.5.5)

I generates a random population of moths and the corresponding fitness values:

I: ∅ → {M, OM}     (3.5.6)

P is the main function that moves the moths around the search space. P receives the matrix M
and returns its update:

P: M → M     (3.5.7)

The T function returns true if the termination criteria are satisfied and false if they are not:

T: M → {true, false}     (3.5.8)

The function I generates the initial solutions and calculates their objective function values:

for i = 1:n
    for j = 1:d
        M(i,j) = (ub(j) - lb(j)) * rand() + lb(j);
    end
end
OM = FitnessFunction(M);

where ub and lb are vectors holding the upper and lower bounds of the variables (one entry per
dimension), respectively.

ub = [ub_1, ub_2, …, ub_d]     (3.5.9)

lb = [lb_1, lb_2, …, lb_d]     (3.5.10)

The next step after initialization is to run the P function iteratively until the T function
terminates the algorithm. P is the primary function by which the moths move around the search
space; each moth updates its position with respect to a flame as

M_i = S(M_i, F_j)     (3.5.11)

where M_i is the ith moth, F_j is the jth flame, and S is the logarithmic spiral function defined
in (3.5.12).

The flowchart of the algorithm proceeds as follows: specify the parameters of MFO, the
maximum iteration count (T) and the desired accuracy, and set the iteration counter K = 1;
generate the initial population of moths; evaluate the fitness of the moths and store the values
as per (3.5.2); update the number of flames as per (3.5.14), sort the flames based on their
fitness values and store their fitness as per (3.5.4); update the position of each moth with
respect to its corresponding flame; if the termination criteria are not satisfied, set K = K + 1
and iterate, otherwise stop.

Figure 2: Flowchart of MFO algorithm


The logarithmic spiral function defined for the MFO algorithm is as follows:

S(M_i, F_j) = D_i · e^{bt} · cos(2πt) + F_j     (3.5.12)

where D_i indicates the distance between the ith moth and the jth flame, b is a constant defining
the shape of the logarithmic spiral, and t is a random number between −1 and 1.

D_i is calculated as given in (3.5.13):

D_i = |F_j − M_i|     (3.5.13)

where M_i is the ith moth and F_j is the jth flame.

The position of the moth is updated with respect to the flame as per (3.5.12).

Updating the moth positions with respect to n different locations in the search space can
degrade the exploitation of the most promising solutions; therefore, the number of flames is
decreased adaptively with the iterations as per (3.5.14).

FlameNo = round( N − l × (N − 1) / T )     (3.5.14)

where l is the current iteration number, N is the maximum number of flames, and T is the
termination criterion, i.e., the maximum number of iterations.

The P function is executed until the T function returns true.
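The moth position update of (3.5.12) and the adaptive flame count of (3.5.14) can be sketched briefly in Python. The parameter values are illustrative; b = 1 is a common choice for the spiral shape constant.

```python
import math
import random

def spiral_update(moth, flame, b=1.0):
    # Eqs. (3.5.12)-(3.5.13): fly the moth along a logarithmic spiral
    # around its flame; t = -1 is the closest approach to the flame and
    # t = 1 the farthest point on the spiral.
    t = random.uniform(-1.0, 1.0)
    return [abs(f - m) * math.exp(b * t) * math.cos(2.0 * math.pi * t) + f
            for m, f in zip(moth, flame)]

def flame_count(N, l, T):
    # Eq. (3.5.14): shrink the number of flames linearly from N down to
    # 1 as the iteration counter l approaches the maximum T.
    return round(N - l * (N - 1) / T)

# One spiral step of a two-dimensional moth around its flame.
new_moth = spiral_update([0.0, 0.0], [3.0, -2.0])
```

The shrinking flame count is what shifts the balance from exploration (many flames early) towards exploitation (a single flame at the end of the run).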

The evolutionary algorithms described above have been developed over time and have proved
useful in improving the performance and efficiency of antenna arrays. Both the linear antenna
array and the time modulated antenna array show improvement when these algorithms are
applied to optimize their performance.
04 Linear Antenna Array

A linear antenna array is a distribution of antennas, with uniform or non-uniform spacing, in
which the elements are geometrically identical, energized at similarly situated points and of
similar orientation. The first two properties ensure that the form of the current distribution
remains the same for all the elements and that the elements of the array have the same radiation
pattern. If similarly situated points on the elements are collinear, then the array is linear
[19-21].

Linear antenna array (LAA) optimization is a long-standing problem, and various researchers
have worked in this domain. Recioui used PSO for SLL reduction of LAA [22]. Khodier and
Christodoulou established a comparative analysis between PSO- and quadratic programming
method (QPM)-synthesized LAA for maximum SLL reduction and null placement [23].
Cengiz and Tokat used GA, the memetic algorithm (MA) and tabu search (TS) to optimize
three different LAA [24]. Singh et al. [25] designed both uniform and non-uniform LAA using
BBO and PSO and found that BBO performed better than PSO. Rajo-Iglesias and
Quevedo-Teruel [26] used an ant-colony algorithm to design LAA for side lobe reduction and
null placement. Guney and Onay performed SLL reduction and null placement utilizing the
harmony search algorithm [27]. Dib et al. [28] applied Taguchi's optimization method and
self-adaptive differential evolution to the synthesis of LAA. Singh and Rattan [29] used the
cuckoo optimization algorithm to optimize three different non-uniform LAA. S. K. Goudos
and V. Moysiadou used an enhanced version of PSO called comprehensive learning particle
swarm optimization to optimize the design of three different LAA [30]. Urvinder Singh and
Rohit Salgotra used the flower pollination algorithm and an enhanced flower pollination
algorithm for pattern synthesis of non-uniform linear arrays [31].

Figure 3 shows an even number of isotropic array elements, 2N (where N is an integer),
arranged symmetrically along the z-axis. The array elements are separated by d = λ/2, with N
elements positioned on either side of the center [32].
Figure 3: Uniform amplitude N-element linear array with equal spacing

The LAA parameters that influence the array factor (AF) are the number of elements, the
excitation amplitudes and the inter-element spacing. These variables play a significant role in
the beam steering and beam forming of the LAA. Assume that each succeeding element has a
progressive phase lead β in its current excitation relative to the preceding one, that the elements
are spaced half a wavelength apart, and that they are isotropic in nature. The summation of the
individual element contributions gives the array factor (AF) [33]. Since the amplitude
excitation is symmetrical around the origin, the array factor (AF) of Figure 3 is

AF(θ) = 1 + e^{j(kd sinθ + β)} + e^{j2(kd sinθ + β)} + ⋯ + e^{j(N−2)(kd sinθ + β)} + e^{j(N−1)(kd sinθ + β)}
⋯ (4.1)

Here, k is the propagation constant (k = 2π/λ) and θ is the angle measured from the broadside
direction. Writing Ψ = kd sinθ + β and taking the reference point at the physical center of the
array, the normalized array factor reduces to

AF(θ) = sin(NΨ/2) / (N sin(Ψ/2))
⋯ (4.2)

The maximum value is attained when Ψ = 0; with no progressive phase lead (β = 0), this
happens when sinθ = 0, so the main beam points broadside. The direction of the maximum is
controlled by the phase delay between the elements: to steer the main beam to a direction θ0,
the progressive phase lead is set to β = −kd sinθ0.

Antenna arrays are designed to produce a beam in a particular direction while keeping the side
lobe level (SLL) small, which reduces the interference with other radiation sources. To achieve
this, the magnitude and phase of the excitation amplitudes are controlled [34-36]. Three types
of non-uniform amplitude distribution that can reduce this interference are the Binomial,
Dolph-Chebyshev and Taylor array distributions; each has its own way of trading main-beam
width against side lobe level. The array factor of a non-uniform amplitude linear array is as
follows:

AF(θ) = Σ_{n=0}^{N−1} w_n e^{jn(kd sinθ + β)}
⋯ (4.3)

Here w_n represents the amplitude excitation of the nth element.
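Equation (4.3) is easy to check numerically: with uniform weights the summation collapses to the normalized closed form of (4.2). A small Python sketch, where N = 16 and d = λ/2 are illustrative choices:

```python
import cmath
import math

def array_factor(theta, weights, d=0.5, beta=0.0):
    # Eq. (4.3): superposition of the element contributions; d is the
    # inter-element spacing in wavelengths, so k*d = 2*pi*d.
    psi = 2.0 * math.pi * d * math.sin(theta) + beta
    return sum(w * cmath.exp(1j * n * psi) for n, w in enumerate(weights))

N = 16
theta = math.radians(20.0)
psi = 2.0 * math.pi * 0.5 * math.sin(theta)

# Uniform excitation reproduces the normalised closed form of Eq. (4.2).
af_sum = abs(array_factor(theta, [1.0] * N)) / N
af_closed = abs(math.sin(N * psi / 2.0) / (N * math.sin(psi / 2.0)))
```

Replacing the uniform weights with a tapered set (e.g. a Dolph-Chebyshev distribution) in the same function changes the side lobe structure without touching the rest of the computation.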

Any antenna array's directivity D(θ, φ) is described by

D(θ, φ) = P(θ, φ) / [ (1/4π) ∮_{4π} P(θ, φ) dΩ ]
⋯ (4.4)

where P(θ, φ) is the power pattern and dΩ is an element of solid angle. The power pattern for
a uniform linear array of elements along the z axis of a rectangular coordinate system is given
by [37]

P(θ, φ) = f²(θ, φ) · AF²(θ)
⋯ (4.5)

where f(θ, φ) is the individual element's radiation pattern and AF(θ) is the array factor.
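For isotropic elements (f = 1), (4.4) and (4.5) can be evaluated numerically. As a sketch under that assumption, a uniform broadside array with N = 16 and d = λ/2 (an illustrative configuration) has a directivity close to N, i.e. about 12 dB:

```python
import math

def af2(theta, N=16, d=0.5):
    # Power pattern |AF(theta)|^2 of a uniform broadside array,
    # theta measured from broadside as in Eq. (4.2).
    psi = 2.0 * math.pi * d * math.sin(theta)
    if abs(math.sin(psi / 2.0)) < 1e-12:
        return float(N * N)
    return (math.sin(N * psi / 2.0) / math.sin(psi / 2.0)) ** 2

# Eq. (4.4) with isotropic elements (f = 1): the pattern does not depend
# on phi, so the solid-angle integral reduces to a midpoint rule in theta.
M = 20000
dth = math.pi / M
total = 0.0
for k in range(M):
    th = -math.pi / 2.0 + (k + 0.5) * dth
    total += af2(th) * math.cos(th) * dth
D = 2.0 * af2(0.0) / total
D_dB = 10.0 * math.log10(D)
```

For half-wavelength spacing the element phase terms are orthogonal over the sphere, which is why the result lands almost exactly on D = N.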
05 Time Modulated Antenna Array

The methods proposed above yield very satisfactory results for the designed array: the antenna
array performance is improved, its sidelobe level (SLL) is decreased and its directivity is
increased. However, realizing the designed excitations with conventional approaches, such as
tapered amplitude distributions and amplitude attenuators, is very challenging, as any small
inaccuracy in the design will cause unwanted deviations in the SLL [38].

Two significant capabilities, pattern synthesis and real-time electronic beam scanning, require
a complicated feed network design. In addition, expensive digital phase shifters are essential
components of a phased array system for electronic beam steering, and these are generally
found only in specialised military systems. These limitations have motivated research into
developing simple and low-cost array systems for commercial applications.
A good approach to this problem is the time-modulated linear array (TMLA), also called the
4-D antenna array. Shanks and Bickmore [39] proposed this idea of time modulation in the
late 1950s and demonstrated the concept using a simple configuration of radiating slots, ferrite
switches and a square-wave generator. In 1961, Shanks established a mathematical model for
achieving electronic beam scanning without the use of phase shifters and applied this analysis
to a phased array system. Building on this investigation, Kummer et al. [40] demonstrated
ultra-low sidelobe levels of −39.5 dB below the main beam for an 8-element time-modulated
slot array, as well as [41] the possibility of real-time electronic beam steering by periodically
modulating each element of a 20-element slotted array at X-band. Lewis and Evans [42] also
developed a theoretical model for reducing interference entering through the sidelobes of a
radar by moving the phase centre of the receiver array.
The idea of the TMLA is to use time as an additional degree of freedom in the design,
employing radio-frequency switches that periodically modulate the elements through an
electronic control circuit. In this way, the amplitude weighting functions of a conventional
antenna array can be synthesised in a time-average sense. Through this periodic time
modulation, harmonics or sidebands are generated at multiples of the switching frequency, and
desired antenna patterns with controlled sidelobe levels can be obtained after a proper filtering
process. Harmonics are generally unwanted because they waste power, although some
applications exploit them for other purposes. The TMLA concept also reduces the effect of
errors, because the on-off switching times, i.e., the use of time as an additional parameter, can
be controlled with much higher accuracy than the array weights obtained with conventional
methods.
Nowadays, this concept of time modulated antenna arrays (TMAA) is being studied
extensively by various researchers [43-60]. Much previous work has focused on exploring the
interest in and potential applications of TMAA in many areas, which can be found in [61-73].

Our main objective has always been the sideband power reduction of the antenna array pattern;
the involvement of time provides a more precise and effective way to control the sidelobe
level, and it also simplifies the constraints of the mechanical design. Figure 4 shows a simple
illustration of an N-element time modulated linear array in receive mode.

Figure 4: N-element time modulated linear array structure in the receive mode.

With respect to Figure 4, the array factor of a conventional linear array with uniform
inter-element spacing, for a narrowband plane wave of angular frequency ω incident on the
array at an angle θ with respect to the broadside direction, is expressed as:

AF(θ, t) = e^{jωt} Σ_{n=0}^{N−1} w_n e^{jknd sinθ}
⋯ (5.1)
Now let us assume the array weights w_n are periodic functions of time with a period T0 much
greater than the RF signal period T = 2π/ω. Since any periodic signal can be decomposed as a
sum of infinitely many oscillating functions, w_n can be expressed in Fourier series form as:

w_n(t) = Σ_{m=−∞}^{∞} a_{mn} e^{jmω0 t}
⋯ (5.2)

Here, ω0 = 2π/T0 << ω is the angular modulation frequency; an ω0 close to ω would cause
serious interference between the carrier and the fundamental frequency. An easy way to realise
time modulation is to periodically turn one or more array elements on and off through
high-speed RF switches, so that the array weighting functions w_n are defined by:

w_n(t) = { 1,  0 ≤ τn_on < t < τn_off ≤ T0
         { 0,  elsewhere
⋯ (5.3)

where τn_on and τn_off are the switch-on and switch-off times of the nth element and T0 is the
switching period. Accordingly,

AF(θ, t) = e^{jωt} Σ_{n=0}^{N−1} w_n(t) e^{jknd sinθ} = Σ_{m=−∞}^{∞} e^{j(ω + mω0)t} Σ_{n=0}^{N−1} a_{mn} e^{jknd sinθ}
⋯ (5.4)

where a_{mn} is given by

a_{mn} = (1/T0) ∫_0^{T0} w_n(t) e^{−jmω0 t} dt
⋯ (5.5)

Now substituting eq. (5.3) into eq. (5.5),

a_{mn} = (1/T0) ∫_{τn_on}^{τn_off} e^{−jmω0 t} dt
⋯ (5.6)

Solving this further, we obtain:

a_{mn} = [(τn_off − τn_on)/T0] · { sin[mπ(τn_off − τn_on)/T0] / [mπ(τn_off − τn_on)/T0] } · e^{−jmπ(τn_off + τn_on)/T0}
⋯ (5.7)
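The closed form of Eq. (5.7) can be verified against a direct numerical integration of Eq. (5.6). A minimal sketch, with illustrative switching instants and the period normalised to T0 = 1:

```python
import cmath
import math

def a_mn_closed(m, t_on, t_off, T0=1.0):
    # Closed form of Eq. (5.7).
    dt = (t_off - t_on) / T0
    if m == 0:
        return complex(dt)
    x = m * math.pi * dt
    return dt * (math.sin(x) / x) \
        * cmath.exp(-1j * math.pi * m * (t_off + t_on) / T0)

def a_mn_numeric(m, t_on, t_off, T0=1.0, steps=50000):
    # Midpoint-rule integration of Eq. (5.6).
    w0 = 2.0 * math.pi / T0
    h = (t_off - t_on) / steps
    acc = 0j
    for k in range(steps):
        acc += cmath.exp(-1j * m * w0 * (t_on + (k + 0.5) * h)) * h
    return acc / T0
```

The m = 0 branch returns the fractional on-time directly, which is the time-average weight discussed below Eq. (5.10).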

This equation shows, with respect to (ω + mω0), that each Fourier coefficient corresponds to
one angular frequency. Generally, the beam pattern at the fundamental frequency is of interest,
so for m = 0 equation (5.4) reduces to:

AF(θ, t) = e^{jωt} Σ_{n=0}^{N−1} a_{0n} e^{jknd sinθ}
⋯ (5.8)

and, setting m = 0 in (5.6),

a_{0n} = (1/T0) ∫_{τn_on}^{τn_off} dt
⋯ (5.9)

which evaluates to

a_{0n} = (τn_off − τn_on) / T0
⋯ (5.10)

and thus equation (5.8) simplifies to:

AF(θ, t) = e^{jωt} Σ_{n=0}^{N−1} [(τn_off − τn_on)/T0] e^{jknd sinθ}
⋯ (5.11)

The term (τn_off − τn_on)/T0 in equation (5.11) is the effective on-time of each array element
and plays the same role as the static amplitude weight in a conventional array. Thus, this
technique allows conventional array amplitude weighting functions to be accomplished in a
time-average sense at the fundamental frequency: switching an element on for a given fraction
of the period sets its excitation coefficient. Distribution functions such as Chebyshev and
Taylor can therefore be easily achieved through this technique.
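A short sketch of this time-average equivalence (the taper values below are illustrative, not taken from this work): centring each on-interval in the period and choosing its length as w_n·T0 reproduces the static weights via Eq. (5.10).

```python
# Illustrative target taper for a 6-element array (not from this work).
T0 = 1.0
weights = [0.4, 0.7, 1.0, 1.0, 0.7, 0.4]

# Centre each switch-on interval in the period with length w_n * T0.
intervals = [((T0 - w * T0) / 2.0, (T0 + w * T0) / 2.0) for w in weights]

# Eq. (5.10): the effective excitation is the fractional on-time.
effective = [(t_off - t_on) / T0 for (t_on, t_off) in intervals]
```

Centring the intervals is one convenient choice; only the interval length fixes the fundamental-frequency weight, while the interval position affects the phase of the sideband coefficients in Eq. (5.7).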
To calculate the directivity of time modulated linear antenna arrays, the total power radiated
should include the power at the centre frequency and the powers of all the sideband
components.

Therefore, the directivity [74] is given by:

D_TMLA = |AF_0(θ0, φ0)|² / [ (1/4π) Σ_{m=−∞}^{∞} ∫_0^{2π} ∫_0^{π} |AF_m(θ, φ)|² sinθ dθ dφ ]
⋯ (5.12)

where θ0 denotes the direction in which AF_0(θ0, φ0) has its maximum value, and AF_m(θ, φ)
denotes the radiation pattern at the mth-order sideband frequency.

The power radiated by the time modulated linear array 𝑃0 at the fundamental (centre) frequency
is given by Eq. (5.13).

P_0 = ∫_0^{2π} ∫_0^{π} |AF_0(θ, φ)|² sinθ dθ dφ
⋯ (5.13)

The power radiated by the time modulated linear array at the harmonic (sideband) frequencies,
P_SR, is given by Eq. (5.14).

P_SR = Σ_{m=−∞, m≠0}^{∞} ∫_0^{2π} ∫_0^{π} |AF_m(θ, φ)|² sinθ dθ dφ
⋯ (5.14)
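The split between the fundamental power P0 and the sideband power PSR can be illustrated for a single on-off modulated element (a sketch, not the full array integral): by Parseval's theorem the sum of |a_m|² over all harmonics equals the duty cycle, so a duty cycle of 0.7 leaves about 70% of that element's power at the fundamental.

```python
import math

def harmonic_powers(duty, m_max=200):
    # Per-element power at the fundamental and at the sidebands for an
    # on-off switched element: |a_m|^2 with a_m from Eq. (5.7).
    p0 = duty ** 2
    psb = 0.0
    for m in range(1, m_max + 1):
        x = m * math.pi * duty
        psb += 2.0 * (duty * math.sin(x) / x) ** 2  # +m and -m sidebands
    return p0, psb

p0, psb = harmonic_powers(0.7)
fraction_desired = p0 / (p0 + psb)
```

This is why the optimizers try to keep the effective on-times large where possible: longer on-times push more of the total power into the fundamental pattern and less into the sidebands.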
06 Results & Discussion

6.1. Different evolutionary algorithms applied for linear antenna array

Here we have applied the above-mentioned algorithms (PSO, HSA, SOA, SSA and MFO) to
the linear antenna array, and the following results were observed.

Table 6.1.1. Optimized Amplitude Excitation and Inter-element Spacing for the linear
antenna array in case of different evolutionary algorithms

Algorithms   Optimized Amplitude Excitations (I_n)                      Spacing (λ)

PSO          0.3257 0.3784 0.3816 0.4838 0.6644 0.6855 0.7679 0.7424    0.6508
             0.8525 0.7722 0.7443 0.5815 0.5685 0.4856 0.2793 0.4239

HSA          0.2411 0.2096 0.3575 0.4572 0.6028 0.7707 0.7319 0.8338    0.7422
             0.8968 0.7988 0.7022 0.6602 0.5186 0.4370 0.3101 0.1824

SOA          0.1336 0.1834 0.2605 0.3965 0.4881 0.5662 0.6220 0.6499    0.7672
             0.7017 0.6706 0.5899 0.5089 0.3907 0.3146 0.2010 0.1894

SSA          0.1539 0.2562 0.3272 0.4965 0.6299 0.7178 0.7830 0.8366    0.8028
             0.8218 0.7669 0.6656 0.5330 0.3891 0.3244 0.1573 0.1202

MFO          0.1036 0.1893 0.2720 0.4184 0.5276 0.6516 0.7442 0.7873    0.8337
             0.7940 0.7411 0.6569 0.5584 0.4001 0.3011 0.1777 0.1152

Table 6.1.2. Linear antenna array parameters for different evolutionary algorithms

Algorithms SLL (dB) HPBW (º) FNBW (º) Directivity (dB) DRR

PSO – 24.86 5.76 14.40 12.7552 3.0523


HSA – 29.29 5.76 14.40 13.0249 4.9167
SOA – 31.44 5.40 14.40 13.1155 5.2522
SSA – 34.20 5.40 14.76 13.1379 6.9601
MFO – 36.58 5.40 14.76 13.2206 7.6641
Difference in Radiation Pattern for linear antenna array due to optimization
using different algorithms

Figure 5: Radiation pattern of linear antenna array using PSO

Figure 6: Radiation pattern of linear antenna array using HSA


Figure 7: Radiation pattern of linear antenna array using SOA

Figure 8: Radiation pattern of linear antenna array using SSA


Figure 9: Radiation pattern of linear antenna array using MFO

Figure 10: Convergence profile of linear antenna array using different algorithms
6.2. Different evolutionary algorithms applied for time modulated antenna
array

Here we have applied the above-mentioned algorithms (PSO, HSA, SOA, SSA and MFO) to
the time modulated antenna array, and the following results were observed.

Table 6.2.1. Optimized Switch-ON Times and Inter-elemental Spacing for time
modulated antenna array in case of different evolutionary algorithms

Algorithms   Optimized Switch-ON Times (τ_n)                            Optimized
                                                                        Inter-element
                                                                        Spacing (λ)

PSO          0.2298 0.2764 0.3997 0.6331 0.7355 0.7905 0.8930 0.8988    0.8017
             0.9391 0.8415 0.7498 0.5477 0.4438 0.3092 0.2365 0.0995

HSA          0.1727 0.2420 0.3678 0.4766 0.6791 0.7723 0.9053 0.9825    0.8065
             0.9313 0.8966 0.8509 0.6679 0.5470 0.3872 0.2400 0.1443

SOA          0.1369 0.2558 0.3685 0.5146 0.6795 0.8114 0.8791 0.9507    0.8398
             0.9125 0.8691 0.7443 0.5859 0.4759 0.2938 0.2013 0.1147

SSA          0.1127 0.2018 0.3023 0.4628 0.6057 0.7362 0.8685 0.9246    0.8701
             0.9384 0.8850 0.7937 0.6570 0.4831 0.3510 0.1992 0.1318

MFO          0.0980 0.1824 0.3073 0.4623 0.6337 0.7910 0.9053 0.9967    0.8868
             0.9792 0.9401 0.8172 0.6654 0.5002 0.3364 0.1989 0.1129

Table 6.2.2. Time modulated antenna array parameters for different evolutionary
algorithms

Algorithms   SLL (dB)   HPBW (º)   FNBW (º)   Pdesired (%)   Pundesired (%)   Directivity (dB)

PSO          – 31.49    5.40       14.40      68.0797        31.9203          12.4542
HSA          – 33.85    5.40       14.76      70.1314        29.8686          12.5219
SOA          – 35.93    5.40       14.76      67.2326        32.7674          12.5269
SSA          – 37.81    5.40       14.76      65.9165        34.0835          12.5822
MFO          – 40.32    5.40       14.76      68.5262        31.4738          12.6288
Difference in Radiation Pattern and Optimized Switch-ON Time for time
modulated antenna array due to optimization using different algorithms

Figure 11: Optimized Switch-ON Time for PSO in TMAA

Figure 12: Radiation Pattern for PSO in TMAA


Figure 13: Optimized Switch-ON Time for HSA in TMAA

Figure 14: Radiation Pattern for HSA in TMAA


Figure 15: Optimized Switch-ON Time for SOA in TMAA

Figure 16: Radiation Pattern for SOA in TMAA


Figure 17: Optimized Switch-ON Time for SSA in TMAA

Figure 18: Radiation Pattern for SSA in TMAA


Figure 19: Optimized Switch-ON Time for MFO in TMAA

Figure 20: Radiation Pattern for MFO in TMAA


Figure 21: Convergence profile of TMAA using different algorithms

Figure 22: Power Distribution in main lobe and side lobe


07 Conclusion

For many years, antenna designers have relied extensively on optimization tactics, yet in the
open literature the details of the techniques and optimization configurations are frequently left
out of the designs. The goal of this project is to present the detailed operative process of five
of the most widely used algorithms and to compare their performance on linear and time
modulated antenna arrays. The optimization techniques are presented in chronological order
based on when they were invented, starting from PSO, which is the oldest, up to fairly recent
algorithms such as MFO. Antenna arrays are used to increase the gain of the transmitted signal,
and the inter-element spacing is a major aspect to consider when designing such arrays. Even
though the gain is increased, so is the interference. Observing the radiation patterns that
compare the fundamental pattern with the harmonics, we can see that the optimization
techniques reduce the side lobe level and hence the interference, keeping it below −30 dB.
These optimization techniques differ in how they work, but all of them ultimately improve the
performance of the antenna. Their working principles are borrowed from natural phenomena
that can also serve as solutions to other problems. The algorithms affect the parameters of the
antenna array, and it is noticeable that although the older algorithms already produce a
significant improvement in antenna array performance, the newer ones have even better
outcomes. Optimizing the linear antenna array, the amplitude excitations become
progressively more tapered from the oldest to the newest algorithm, and the newer algorithms
show a significant improvement in side lobe level reduction and in parameters like directivity,
half power beam width, first null beam width and dynamic range ratio. In the case of the time
modulated antenna array, the optimized switch-ON times improve the ratio of the power
radiated in the desired and undesired directions. To sum it all up, optimization algorithms have
improved over time, and this has a significant effect on how antenna arrays are exploited to
their fullest potential. The convergence graphs show that the newer algorithms require fewer
iterations in most cases than the older ones. It can also be observed that, even though
traditional algorithms like PSO and DE are more than sufficient for providing near-optimal
results, newer and more advanced ones like Moth Flame Optimization (MFO) excel, showing
better performance at a faster rate.
08 References

1. C. A. Balanis, Antenna Theory: analysis and design, Chapter 1: Antennas, 3rd edition,
Wiley Interscience, 2005, pp. 1-24.

2. E. Cuevas, M. Cienfuegos, D. Zaldívar, and M. Pérez-Cisneros, “A swarm


optimization algorithm inspired in the behavior of the social-spider,” Expert Systems
with Applications, vol. 40, no. 16, pp. 6374–6384, 2013.
3. L. A. M. Pereira, D. Rodrigues, P. B. Ribeiro, J. P. Papa, and S. A. T. Weber, “Social-
spider optimization-based artificial neural networks training and its applications for
Parkinson's disease identification,” in Proceedings of the 27th IEEE International
Symposium on Computer-Based Medical Systems (CBMS '14), pp. 14–17, IEEE, New
York, NY, USA, May 2014.
4. S. Z. Mirjalili, S. Saremi, and S. M. Mirjalili, “Designing evolutionary feedforward
neural networks using social spider optimization algorithm,” Neural Computing and
Applications, vol. 26, no. 8, pp. 1919–1928, 2015.

5. D. R. Pereira, M. A. Pazoti, L. A. Pereira, and J. P. Papa, “A social-spider optimization


approach for support vector machines parameters tuning,” in Proceedings of the 2014
IEEE Symposium On Swarm Intelligence (SIS), pp. 1–6, Orlando, FL, USA, December
2014.

6. D. R. Pereira, M. A. Pazoti, L. A. Pereira et al., “Social-Spider Optimization-based


Support Vector Machines applied for energy theft detection,” Computers & Electrical
Engineering, vol. 49, pp. 25–38, 2016.

7. E. Cuevas, A. Luque, D. Zaldívar, and M. Pérez-Cisneros, “Evolutionary calibration of


fractional fuzzy controllers,” Applied Intelligence, vol. 47, no. 2, pp. 291–303, 2017.

8. H. Shayeghi, A. Molaee, and A. Ghasemi, “Optimal design of fopid controller for LFC
in an interconnected multi-source power system,” International Journal on “Technical
and Physical Problems of Engineering”, pp. 36–44, 2016.

9. H. Shayeghi, A. Molaee, K. Valipour, and A. Ghasemi, “Multi-source power system


FOPID based Load Frequency Control with high-penetration of Distributed
Generations,” in Proceedings of the 21st Electrical Power Distribution Network
Conference, EPDC 2016, pp. 131–136, Iran, April 2016.

10. A. A. El-Fergany and M. A. El-Hameed, “Efficient frequency controllers for


autonomous two-area hybrid microgrid system using social-spider optimiser,” IET
Generation, Transmission & Distribution, vol. 11, no. 3, pp. 637–648, 2017.

11. S. Ouadfel and A. Taleb-Ahmed, “Social spiders optimization and flower pollination
algorithm for multilevel image thresholding: a performance study,” Expert Systems
with Applications, vol. 55, pp. 566–584, 2016.

12. L. Maurya, P. K. Mahapatra, and A. Kumar, “A social spider optimized image fusion
approach for contrast enhancement and brightness preservation,” Applied Soft
Computing, vol. 52, pp. 575–592, 2017.

13. E. Cuevas, V. Osuna, and D. Oliva, Evolutionary Computation Techniques: A


Comparative Perspective, vol. 686, Springer, 2017.

14. A. Laddha, A. Hazra, and M. Basu, “Optimal operation of distributed renewable energy
resources based micro-grid by using Social Spider Optimization,” in Proceedings of the
IEEE Power, Communication and Information Technology Conference, PCITC 2015,
pp. 756–761, India, October 2015.

15. Z. Hejrati, S. Fattahi, and I. Faraji, “Optimal congestion management using the Social
Spider Optimization algorithm,” in 29th International Power System Conference, Iran,
2014.

16. D. Vijay and V. Priya, “Anti-Islanding Protection of Distributed Generation Based on


Social Spider Optimization Technique,” International Journal of Advanced
Engineering Research and Science, vol. 4, no. 6, pp. 32–40, 2017.

17. Mirjalili, Seyedali. “Moth-flame optimization algorithm: A novel nature-inspired


heuristic paradigm.” Knowledge-Based Systems 89 (2015):228-249.
18. Gaston, Kevin J., et al. "The ecological impacts of nighttime light pollution: a
mechanistic appraisal." Biological reviews 88.4 (2013): 912-927.
19. S. A. Schelkunoff, “A Mathematical Theory of Linear Arrays”, Bell System Technical
Journal, vol. 22, Jan. 1943, pp. 80-87.
20. D. R. Rhodes, “On a fundamental principle in the theory of planar antennas”,
Proceeding of the IEEE, vol. 52, no. 9, Sep. 1964, pp. 1013 – 1021.
21. H. P. Neff and J. D. Tillman, “An omnidirectional circular antenna array excited
parasitically by a central driven element”, Transactions of the American Institute of
Electrical Engineers, Part I: Communication and Electronics, vol. 79, no. 2, May 1960,
pp. 190-192.
22. A. Recioui, “Sidelobe level reduction in linear array pattern synthesis using particle
swarm optimization,” Journal of Optimization Theory and Applications, vol. 153, no.
2, pp. 497–512, 2012.
23. M. M. Khodier and C. G. Christodoulou, “Linear array geometry synthesis with
minimum sidelobe level and null control using particle swarm optimization,” IEEE
Transactions on Antennas and Propagation, vol. 53, no. 8, pp. 2674–2679, 2005.
24. Y. Cengiz and H. Tokat, “Linear antenna array design with use of genetic, memetic and
tabu search optimization algorithms,” Progress in Electromagnetics Research C, vol. 1,
pp. 63–72, 2008.
25. U. Singh, H. Kumar, and T. S. Kamal, “Linear array synthesis using biogeography
based optimization,” Progress in Electromagnetics Research M, vol. 11, pp. 25–36,
2010.
26. E. Rajo-lglesias and O. Quevedo-Teruel, “Linear array synthesis using an ant-colony-
optimization-based algorithm,” IEEE Antennas and Propagation Magazine, vol. 49, no.
2, pp. 70–79, 2007.
27. K. Guney and M. Onay, “Optimal synthesis of linear antenna arrays using a harmony
search algorithm,” Expert Systems with Applications, vol. 38, no. 12, pp. 15455–
15462, 2011.

28. N. Dib, S. Goudos, and H. Muhsen, “Application of Taguchi's optimization method and
self-adaptive differential evolution to the synthesis of linear antenna arrays,” Progress
In Electromagnetics Research, vol. 102, 2010, pp. 159–180.
29. U. Singh and M. Rattan, “Design of linear and circular antenna arrays using cuckoo
optimization algorithm,” PIER C, vol. 46, pp. 1–11, 2014.
30. S. K. Goudos, V. Moysiadou, T. Samaras, K. Siakavara, and J. N. Sahalos, “Application
of a comprehensive learning particle swarm optimizer to unequally spaced linear array
synthesis with sidelobe level suppression and null control,” IEEE Antennas and
Wireless Propagation Letters, vol. 9, pp. 125–129, 2010.
31. U. Singh and R. Salgotra, “Pattern Synthesis of Linear Antenna Arrays Using Enhanced
Flower Pollination Algorithm,” International Journal of Antennas and Propagation,
vol. 2017, Article ID 7158752, 2017, pp. 1–11.
32. A. Das, D. Mandal, S. P. Ghoshal, and R. Kar, “Moth flame optimization based design
of linear and circular antenna array for side lobe reduction,” International Journal of
Numerical Modelling, e2486, 2018.
33. C. A. Balanis, Antenna Theory: analysis and design, Chapter 6: Arrays: Linear, Planar
and Circular, 3rd edition, Wiley Interscience, 2005, pp. 283-371.
34. Mohammad Asif Zaman, Md. Abdul Matin, "Nonuniformly Spaced Linear Antenna
Array Design Using Firefly Algorithm", International Journal of Microwave Science
and Technology, vol. 2012, Article ID 256759, 8 pages, 2012.
35. T. B. Chen, Y. B. Chen, Y. C. Jiao, and F. S. Zhang, “Synthesis of antenna array using
particle swarm optimization,” in Proceedings of the Asia-Pacific Microwave
Conference (APMC '05), vol. 3, p. 4, December 2005.
36. A. Recioui, A. Azrar, H. Bentarzi, M. Dehmas, and M. Chalal, “Synthesis of linear
arrays with sidelobe reduction constraint using genetic algorithm,” International
Journal of Microwave and Optical Technology, vol. 3, no. 5, pp. 524–530, 2008.
37. Bach, H. (1970). Directivity of basic linear arrays. IEEE Transactions on Antennas and
Propagation, 18(1), 107-110.
38. Schrank H. Low sidelobe phased array antennas. IEEE Antennas and Propagation
Society Newsletters. April 1983;25(2):4-9.
39. H. E. Shanks and R. W. Bickmore, “Four-dimensional electromagnetic radiators”,
Canadian Journal of Physics, vol. 37, 1959, pp. 263-275.
40. W. H. Kummer, A. T. Villeneuve, T. S. Fong, and F. G. Terrio, “Ultra-low sidelobes
from time-modulated arrays”, IEEE Transactions on Antennas and Propagation, vol. 11,
no. 6, Nov., 1963, pp. 633-639.
41. W. H. Kummer, A. T. Villeneuve and F. G. Terrio, “New antenna idea - Scanning
without Phase Shifters,” Electronics, vol. 36, March, 1963, pp. 27-32.
42. B. L. Lewis and J. B. Evins, “A new technique for reducing radar response to signals
entering antenna sidelobes”, IEEE Transactions on Antennas and Propagation, vol. 31,
no. 6, Nov.1983, pp. 993-996.
43. S. Yang, Y. B. Gan and A. Qing, “Sideband suppression in time-modulated linear arrays
by the differential evolution algorithm”, IEEE Antennas Wireless Propagation Letters,
vol. 1, no. 9, 2002, pp. 173-175.
44. S. Yang, Y. B. Gan and P. K. Tan, “A new technique for power-pattern synthesis in
time-modulated linear arrays”, IEEE Antennas Wireless Propagation Letters, vol. 2, no.
20, 2003, pp. 285-287.
45. J. Fondevila, J. C. Bregains, F. Ares and E. Moreno, “Optimising uniformly excited
linear arrays through time modulation”, IEEE Antennas Wireless Propagation Letters,
vol. 3, pp. 298-300, 2004.
46. S. Yang, Y. B. Gan, A. Qing, and P. K. Tan, “Design of uniform amplitude time
modulated linear array with optimized time sequences”, IEEE Transactions on Antennas
and Propagation, vol. 53, no. 7, July, 2005, pp. 2337-2339.
47. A. Tennant and B. Chambers, “Control of the harmonic radiation patterns of
time-modulated antenna arrays,” IEEE Antennas and Propagation Conference, July, 2008,
pp. 1-4.
48. X. Zhu, S. Yang and Z. Nie, “Full-Wave Simulation of Time Modulated Linear
Antenna Arrays in Frequency Domain,” IEEE Transactions on Antennas and
Propagation, vol. 56, no. 5, May, 2008, pp. 1479-1482.
49. L. Manica, P. Rocca, L. Poli, A. Massa, “Almost time-independent performance in
time-modulated linear arrays,” IEEE Antennas Wireless Propagation Letters, vol. 8,
Aug., 2009, pp. 843-846.
50. P. Rocca, L. Manica, L. Poli and A. Massa, “Synthesis of compromise sum-difference
arrays through time-modulation,” IET Radar Sonar Navigation, vol. 3, Nov., 2009, pp.
630-637.
51. G. Li, S. Yang, Y. Chen, and Z. Nie, “A novel electronic beam steering technique in
time modulated antenna array,” Progress In Electromagnetics Research, vol. 97, 2009,
pp. 391-405.
52. P. Rocca, L. Poli, L. Manica and A. Massa, “Compromise pattern synthesis by means
of optimized time-modulated array solutions,” IEEE Antennas and Propagation Society
International Symposium, June, 2009, pp. 1 - 4.
53. X. Huang, S. Yang, G. Li and Z. Nie, “Power-pattern synthesis in time modulated
semicircular arrays,” IEEE Antennas and Propagation Society International
Symposium, June, 2009, pp. 1 - 4.
54. L. Poli, L. Manica, P. Rocca and A. Massa, “Subarrayed time-modulated arrays with
minimum power losses,” IEEE Antennas and Propagation Society International
Symposium, July, 2010, pp. 1 - 4.
55. L. Poli, P. Rocca, L. Manica and A. Massa, “Pattern synthesis in time-modulated linear
arrays through pulse shifting,” IET Microwaves, Antennas & Propagation, vol. 4, no.
9, Sep., 2010, pp. 1157-1164.
56. A. Basak, S. Pal, S. Das, A. Abraham and V. Snasel, “A modified Invasive Weed
Optimization algorithm for time-modulated linear antenna array synthesis,” IEEE
Congress on Evolutionary Computation, July, 2010, pp. 1- 8.
57. E. Aksoy and E. Afacan, “Thinned Nonuniform Amplitude Time-Modulated Linear
Arrays,” IEEE Antennas Wireless Propagation Letters, vol. 9, May, 2010, pp. 514-517.
58. G. R. Hardel, N. T. Yallaparagada, D. Mandal and A. K. Bhattacharjee, “Introducing
deeper nulls for time modulated linear symmetric antenna array using real coded
genetic algorithm”, 2011 IEEE Symposium on Computers & Informatics (ISCI), 20-23
March 2011, pp. 249-254.
59. M. D’Urso, A. Iacono, A. Iodice and G. Franceschetti, “Optimizing uniformly excited
time-modulated linear arrays”, 2011 Proceedings of the Fifth European Conference on
Antennas and Propagation (EuCAP), Rome, Italy, 11 – 15 April, 2011, pp. 2082-2086.
60. S. K. Mandal, R. Ghatak and G. K. Mahanti, “Minimization of side lobe level and side
band radiation of a uniformly excited time modulated linear antenna array by using
Artificial Bee Colony algorithm”, 2011 IEEE Symposium on Industrial Electronics and
Applications (ISIEA), 25-28 Sept. 2011, pp. 247-250.
61. G. Li, S. Yang and Z. Nie, “Adaptive beamforming in time modulated antenna arrays
based on beamspace data,” Asia Pacific Microwave Conference, Singapore, Dec., 2009,
pp. 743-736.
62. G. Li, S. Yang, and Z. Nie, “A study on the application of time modulated antenna
arrays to airborne pulsed Doppler radar,” IEEE Transactions on Antennas and
Propagation, vol. 57, no. 5, May, 2009, pp. 1578–1582.
63. A. Tennant and B. Chambers, “Time-switched array analysis of phase switched
screens,” IEEE Transactions on Antennas and Propagation, vol. 57, no. 3, Mar., 2009,
pp. 808–812.
64. A. Tennant and B. Chambers, “A two-element time-modulated array with
direction-finding properties”, IEEE Antennas Wireless Propagation Letters, vol. 6, 2007,
pp. 64–65.
65. G. Li, S. Yang, Y. Chen, and Z. Nie, “Direction of arrival estimation in time modulated
linear arrays,” IEEE Antennas and Propagation Society International Symposium, June,
2009, pp. 1-4.
66. A. Tennant, “Experimental two-element time-modulated direction finding array,”
IEEE Transactions on Antennas and Propagation, vol. 58, no. 3, Mar., 2010, pp. 986-
988.
67. G. Li, S. Yang and Z. Nie, “Direction of arrival estimation in time modulated linear
arrays with unidirectional phase center motion,” IEEE Transactions on Antennas and
Propagation, vol. 58, no. 4, April, 2010, pp. 1105-1111.
68. Y. Tong and A Tennant, “Simultaneous control of sidelobe level and harmonic beam
steering in time-modulated linear arrays,” IET Electronics Letters, vol. 46, no.3, Feb.,
2010, pp. 200-202.
69. Y. Tong and A. Tennant, “A Wireless Communication System Based on a
Time-Modulated Array”, IEEE Loughborough Antennas and Propagation Conference,
Loughborough, UK, 8 – 9 Nov., 2010, pp. 245 – 248.
70. L. Poli, P. Rocca, G. Oliveri and A. Massa, “Adaptive nulling in time-modulated linear
arrays with minimum power losses,” IET Microwaves, Antennas & Propagation, vol.
5, no. 2, Jan., 2011, pp. 157-166.
71. R. L. Haupt, “Time-modulated receive arrays”, 2011 IEEE International Symposium
on Antennas and Propagation (APSURSI), 3-8 July 2011, pp. 968-971.
72. G. Oliveri, “Smart antennas design exploiting time-modulation”, 2011 IEEE
International Symposium on Antennas and Propagation (APSURSI), 3-8 July 2011, pp.
2833-2836.
73. Y. Tong and A. Tennant, “A Two-Channel Time Modulated Linear Array with
Adaptive Beamforming”, IEEE Transactions on Antennas and Propagation, vol. 60, no.
1, January, 2012, pp. 141-147.
74. S. Yang, Y. B. Gan, and P. K. Tan, ‘‘Comparative study of low sidelobe time modulated
linear arrays with different time schemes,’’ J. Electromagn. Waves Appl., vol. 18, no.
11, pp. 1443–1458, Jan. 2004.
