
Broadband Microstrip Bow-Tie Patch Antenna Parameter Optimization Using Soft Computing

Alok Pandey #1, Swati Gaur *2, Kirti Vyas #3

#1,*2 Department of E & C, SIIT, Lakhna Road via Sanganer Bazar, Jaipur
#3 ACEIT, Kukas Industrial Area, Jaipur

Abstract- Adaptive neural networks (ANN) are electronic systems that can be trained to remember the behavior of a modeled structure at given operational points, and that can then be used to approximate the behavior of the structure away from the training points. These approximation abilities of neural nets are demonstrated by modeling a bow-tie microstrip antenna. Parameters are tuned for the accuracy and the efficiency of the neural models. Neural-model algorithms for microstrip antenna design are discussed in this paper. The ANN model integrates adaptable fuzzy inputs with a modular neural network to rapidly and accurately approximate complex functions.

Index Terms- Adaptive neural network, microstrip antennas, modeling, optimization

1. INTRODUCTION

Microstrip antennas, due to their many attractive features, have gained popularity among researchers and in industrial applications. In high-performance spacecraft, aircraft, airship, aerostat, missile, and satellite applications, where size, weight, cost, performance, ease of installation, and aerodynamic profile are constraints, low-profile antennas may be required. Presently, there are many government and commercial applications, such as mobile, radio, and wireless communications, that have similar specifications. To meet these requirements, microstrip antennas can be used [13]. In the literature, artificial neural network (ANN) models have been built for the design and analysis of microstrip antennas in rectangular form. In this work, the width and the length for the design of rectangular microstrip antennas are obtained by using a new method based on artificial neural networks. Artificial neural networks were developed from neurophysiology by morphologically and computationally mimicking the human brain. Although the precise operational details of artificial neural networks are quite different from those of human brains, they are similar in three aspects: they consist of a very large number of processing elements; each neuron connects to a large number of other neurons; and the functionality of a network is determined by modifying the strengths of the connections during a learning phase. The purpose of this article is to provide an overview of recent developments in the design and analysis of microstrip antennas using neural networks, illustrated by an example of microstrip antenna design using an ANN.

2. CANFIS ARCHITECTURE

Fig.1 CANFIS Architecture

Figure 1 shows the adaptive neuro-fuzzy inference system. Fuzzy rule-based systems and artificial neural networks originated from different philosophies and were originally considered independent of each other. Later studies revealed that they actually have a close relation. Buckley et al. [1] discussed the functional equivalence between neural networks and fuzzy expert systems. The integration of fuzzy logic and neural networks has given birth to an emerging technology field: fuzzy neural networks. The theory of fuzzy logic provides the mathematical strength to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning; it also provides a

mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. While fuzzy theory provides an inference mechanism under cognitive uncertainty, computational neural networks offer exciting advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. Computational neural networks are capable of coping with computational complexity, nonlinearity, and uncertainty. It is interesting to note that fuzzy logic is another powerful tool for modeling the uncertainties associated with human cognition, thinking, and perception [2,3]. Many authors have proposed various neuro-fuzzy models as well as complex training algorithms. Of these, Jang [4] proposed the well-known neuro-fuzzy model ANFIS (adaptive network-based fuzzy inference system), which has been successfully applied in various fields. In ANFIS, a hybrid learning algorithm is adopted that integrates the BP (backpropagation) algorithm with the recursive least-squares algorithm to adjust the parameters. ANFIS was later extended to the coactive ANFIS in [5] and to the generalized ANFIS in [6]. Horikawa et al. [7] proposed a neuro-fuzzy model using sigmoid functions to generate the bell-shaped input membership functions and trained it with the BP algorithm. However, some practical difficulties associated with gradient descent are slow convergence and ineffectiveness at finding a good solution [8].

The Proposed Neuro-Fuzzy Model

The proposed neuro-fuzzy model is a multilayer neural-network-based fuzzy system with a total of five layers. In this connectionist structure, the input and output nodes represent the input states and the output response, respectively, and in the hidden layers there are nodes functioning as membership functions (MFs) and rules.
This eliminates the disadvantage of a normal feed-forward multilayer network, which is difficult for an observer to understand or to modify. Throughout the simulation examples presented in this paper, all the MFs used are bell-shaped (Gaussian) functions, defined as:

μ(x) = exp( -(x - c)² / σ² )
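For concreteness, the bell-shaped MF above can be evaluated with a few lines of Python; the function name and the sample centre/width values are ours, chosen only for illustration:

```python
import math

def gaussian_mf(x, c, sigma):
    """Bell-shaped (Gaussian) membership function: exp(-(x - c)^2 / sigma^2)."""
    return math.exp(-((x - c) ** 2) / sigma ** 2)

# Membership degree is 1 at the centre and decays with distance from c.
print(gaussian_mf(5.0, 5.0, 2.0))  # 1.0 at the centre
print(gaussian_mf(7.0, 5.0, 2.0))  # e^-1, i.e. about 0.3679, one width away
```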

A Gaussian membership function is determined by c and σ: c represents the centre of the MF, and σ determines the width of the MF. A detailed description of the components of the model's structure and functionalities, and the philosophy behind this architecture, is given below.

A. Input Layer

Nodes in this layer are input nodes that represent the input linguistic variables as crisp values. The nodes in this layer only transmit the input values to the next layer, the membership function layer. Each node is connected only to those nodes of layer 2 that represent the linguistic values of the corresponding linguistic variable.

B. Fuzzy Input Layer

Nodes in this layer act as membership functions representing the terms of the respective linguistic variables. The input values are fed to the fuzzy input layer, which calculates the membership degrees. This is implemented using Gaussian membership functions with two parameters, the mean (or centre) c and the variance (or width) σ. This layer implements fuzzification of the inputs; it represents a fuzzy quantization of the input variables:

y_t(FI) = exp( -(x_i - c)² / σ² )

C. Rule Nodes Layer

The third layer contains rule nodes that evolve through learning; that is, all nodes in the third layer are created during learning. The rule nodes represent prototypes of input-output data associations, which can be represented graphically as associations of hyper-spheres from the fuzzy input and the fuzzy output space. Hence, the function of this layer is

y_t(R) = min_{i ∈ I_t} y_i(FI)

where I_t is the set of indices of the nodes in the fuzzy input layer that are connected to node t in the rule layer, and y_i(FI) is the output of node i in the fuzzy input layer.

D. Fuzzy Output Layer

The fourth layer is the fuzzy output layer, where each node represents a fuzzy quantization of the output variables. The activation of a node represents the degree to which its membership function is supported by all the fuzzy rules together.
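As a concrete illustration of the flow from crisp inputs through fuzzification to rule activations, here is a minimal Python sketch; the layer sizes, centres, widths, and rule connection sets are assumed for illustration and are not values from the paper:

```python
import math

def fuzzify(x, mfs):
    """Fuzzy input layer: membership degree of crisp input x for each (c, sigma) MF."""
    return [math.exp(-((x - c) ** 2) / s ** 2) for (c, s) in mfs]

def rule_layer(fuzzy_degrees, rule_index_sets):
    """Rule nodes layer: y_t(R) = min over the fuzzy-layer nodes connected to rule t."""
    return [min(fuzzy_degrees[i] for i in idx) for idx in rule_index_sets]

# Illustrative example: one input with three linguistic terms (low, mid, high).
mfs = [(0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]  # assumed (centre, width) pairs
degrees = fuzzify(0.5, mfs)
# Two assumed rules: rule 0 uses terms {0, 1}, rule 1 uses terms {1, 2}.
activations = rule_layer(degrees, [[0, 1], [1, 2]])
print(activations)  # both rules fire equally for this symmetric example
```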

The connection weights w_kj of the links connecting nodes k in the fuzzy output layer to nodes j in the rule nodes layer conceptually represent the CFs (certainty factors) of the corresponding fuzzy rules when inferring fuzzy output values:

y_k(FO) = min_{i ∈ I_k} ( y_i(R) · w_ik )

where I_k is the set of indices of the nodes in the rule layer that are connected to node k in the fuzzy output

Fig. 2 Geometry of a broadband bow-tie microstrip antenna with integrated reactive loading.


Layer.

E. Output Layer

This layer represents the output variables of the system. These nodes and the links attached to them act as a defuzzifier: a node in this layer computes a crisp output signal. The output variable layer performs the defuzzification of the fuzzy output variables. The input-output relationship of the units in this layer is defined by

y_t = Σ_k ( y_k(FO) · c_tk · σ_tk ) / Σ_k ( y_k(FO) · σ_tk )

where c_tk and σ_tk are, respectively, the centroid and the width of the membership function of the output linguistic value represented by node k in the fuzzy output layer.

3. SIMULATION MODEL

Figure 2 shows the geometry of a probe-fed bow-tie microstrip antenna with an inserted integrated reactive loading for bandwidth enhancement. The dimensions of the bow-tie patch with the integrated reactive loading are given in the figure. The bow-tie patch has a flare angle θ and a width of 24.9 mm, and the length between its two radiating edges is fixed at 37.3 mm. To integrate the cascaded microstrip-line sections that provide the reactive loading, a rectangular notch (dimensions given in Fig. 2) is cut in the patch. The cascaded microstrip-line sections consist of three sections of different dimensions. For the proposed bow-tie patch antenna with a specific flare angle, there exists an optimal value of d for which the integrated reactive loading is resonant at the fundamental resonant frequency f10 of the bow-tie patch. In this case, the fundamental resonant mode can be split into two near-degenerate resonant modes, which makes bandwidth enhancement of the bow-tie patch antenna possible. A single probe feed is located along the centerline of the bow-tie patch at a distance dp from the patch center. dRL is the total length of the integrated reactive loading.

4. RESULTS

Experiments on the proposed antenna with various flare angles were conducted; the results are shown in Table 1. dp (output) and d (output) are the optimized values of dp and d, where the optimization has been done by the artificial neural network.
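The CANFIS forward pass of Section 2, which produces the optimized values reported in Table 1, can be sketched end-to-end in Python. Every membership parameter, rule connection set, weight, centroid, and width below is an illustrative assumption, not a value from the paper:

```python
import math

def canfis_forward(x, input_mfs, rule_sets, w, out_centroids, out_widths):
    """Illustrative CANFIS-style forward pass for a single crisp input x.

    Layer 2: Gaussian fuzzification of x.
    Layer 3: rule activations as the min over connected fuzzy nodes.
    Layer 4: fuzzy output activations y_k(FO) = min_j(rule_j * w_kj).
    Layer 5: crisp output via the weighted-average defuzzifier
             y = sum_k(y_k * c_k * s_k) / sum_k(y_k * s_k).
    """
    fuzzy = [math.exp(-((x - c) ** 2) / s ** 2) for (c, s) in input_mfs]
    rules = [min(fuzzy[i] for i in idx) for idx in rule_sets]
    y_fo = [min(rules[j] * w_k[j] for j in range(len(rules))) for w_k in w]
    num = sum(y * c * s for y, c, s in zip(y_fo, out_centroids, out_widths))
    den = sum(y * s for y, s in zip(y_fo, out_widths))
    return num / den

# Illustrative, symmetric parameters (not from the paper):
y = canfis_forward(
    0.5,
    input_mfs=[(0.0, 1.0), (1.0, 1.0)],
    rule_sets=[[0], [1]],
    w=[[1.0, 0.5], [0.5, 1.0]],
    out_centroids=[0.0, 1.0],
    out_widths=[1.0, 1.0],
)
print(y)  # 0.5, by symmetry of the assumed parameters
```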

Structures of the ANN

The ANN, which has a configuration of 3 input neurons and 2 output neurons with learning rate = 0.1 and error goal = 0.001, was trained for 400 epochs. The membership functions are Gaussian and the fuzzy model is of the Tsukamoto type. A sigmoid axon with the momentum learning rule is used in training. The resulting ANN outputs are shown in Table 1. The ANN was trained with 45 samples and tested with 21 samples, determined according to the definition of the problem; 3 inputs and 2 outputs were used for the analysis. The results of the ANN analysis for an isotropic material, and their comparison with the targets, are given in Table 1. The simulations were carried out in NeuroSolutions 6 and CST software.

REFERENCES
[1] C. A. Balanis, Antenna Theory, 3rd ed., John Wiley and Sons, Hoboken, NJ, 1997.
[2] I. J. Bahl and P. Bhartia, Microstrip Antennas, Artech House, Dedham, MA, 1980.
[3] D. M. Pozar, "Microstrip antennas," Proc. IEEE, vol. 80, pp. 79-81, 1992.
[4] J. J. Buckley and Y. Hayashi, "Fuzzy neural networks: a survey," Fuzzy Sets and Systems, vol. 66, pp. 1-13, 1994.
[5] S. Chokri and T. Abdelwahed, "Neural network for modeling: a new approach," Lecture Notes in Computer Science, vol. 2659, Springer-Verlag, Berlin, pp. 159-168, 2003.
[6] H. Takagi, "Fusion techniques of fuzzy systems and neural networks, and fuzzy systems and genetic algorithms," Proc. SPIE, vol. 2061, pp. 402-413, 1995.
[7] J.-S. Jang, "ANFIS: adaptive-network-based fuzzy inference system," IEEE Trans. Systems, Man, and Cybernetics, vol. 23, pp. 665-685, 1993.
[8] E. Mizutani and J.-S. Jang, "Coactive neural fuzzy modeling," in Proc. IEEE Int. Conf. on Neural Networks, vol. 2, Perth, Australia, pp. 760-765, 1995.

[9] M. F. Azeem et al., "Generalization of adaptive neuro-fuzzy inference systems," IEEE Trans. Neural Networks, vol. 11, pp. 1332-1346, 2000.
[10] S. Horikawa et al., "On fuzzy modeling using fuzzy neural networks with the back-propagation algorithm," IEEE Trans. Neural Networks, vol. 3, pp. 801-806, 1992.
[11] G. Puskorius and L. Feldkamp, "Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks," IEEE Trans. Neural Networks, vol. 5, pp. 279-297, 1994.
[12] L. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338-353, 1965.
[13] E. H. Mamdani and S. Assilian, "An experiment in linguistic synthesis with a fuzzy logic controller," Int. J. Man-Machine Studies, vol. 7, 1975.

Table 1
RESULTS OF THE ANN ALGORITHM AND COMPARISON WITH THE TARGETS

Theta (deg) | dRL/lambda | f    | dp   | d   | dp (output) | d (output)
0           | 0.17       | 1952 | 9.6  | 2.8 | 9.600005375 | 2.80002595
10          | 0.168      | 1880 | 10.6 | 3.1 | 10.59997057 | 3.100010207
20          | 0.173      | 1829 | 10.4 | 3.9 | 10.39999198 | 3.900013193
30          | 0.169      | 1744 | 9    | 4.2 | 8.999982198 | 4.1999925
40          | 0.168      | 1645 | 8.7  | 4.9 | 8.699994046 | 4.899970526
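The closeness of the ANN outputs in Table 1 to their targets can be checked directly; the short sketch below reproduces the dp and d columns of the table and computes the worst-case absolute error:

```python
# Target vs. ANN-optimized values, copied from Table 1 (theta = 0..40 deg).
dp_target = [9.6, 10.6, 10.4, 9.0, 8.7]
d_target = [2.8, 3.1, 3.9, 4.2, 4.9]
dp_output = [9.600005375, 10.59997057, 10.39999198, 8.999982198, 8.699994046]
d_output = [2.80002595, 3.100010207, 3.900013193, 4.1999925, 4.899970526]

# Absolute error for every table entry, dp columns first, then d columns.
errors = [abs(t - o) for t, o in zip(dp_target + d_target, dp_output + d_output)]
print(f"max abs error = {max(errors):.6f}")  # prints: max abs error = 0.000029
```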

Fig. 3 Graphical representation of the results of Table 1