European Journal of Scientific Research ISSN 1450-216X Vol.27 No.2 (2009), pp.199-216 © EuroJournals Publishing, Inc.
Design and Analog VLSI Implementation of Neural Network Architecture for Signal Processing
Cyril Prasanna Raj P
VLSI System Design Centre, MSR School of Advanced Studies, Bangalore
E-mail: email@example.com
Tel: +91-80-23605539; Fax: +91-80-23601983

S.L. Pinjare
VLSI System Design Centre, MSR School of Advanced Studies, Bangalore
E-mail: firstname.lastname@example.org
Tel: +91-80-23605539; Fax: +91-80-23601983

Abstract

Biological systems process analog signals such as images and sound efficiently. To process information the way biological systems do, we make use of Artificial Neural Networks (ANN). The focus of this paper is the implementation of a Neural Network Architecture (NNA) with on-chip learning in analog VLSI for generic signal processing applications. The artificial neural network architecture comprises analog components such as multipliers and adders along with a tan-sigmoid function circuit. The proposed neural architecture is trained using the Back Propagation (BP) algorithm in the analog domain, and a new technique for weight storage with refresh is proposed; the neural architecture is thus a complete analog structure. The multiplier block is implemented using the Gilbert cell, and the tansig function is realized using MOS transistors. The functionality of the designed neural architecture is verified for analog operations such as amplification and frequency multiplication, and for digital operations such as AND, OR, NOT and XOR; the network realizes its functionality for the trained targets, which is verified using simulation results. The network designed is further extended to image compression in the analog domain, where 50% image compression is achieved. The output level swing achieved for the designed neural architecture is ±2.8 V maximum for a ±3 V voltage supply, and the circuit converged for a 10 MHz signal within 200 ns. Layout design and verification of the proposed design are carried out using Cadence Virtuoso and Synopsys HSPICE. The chip dimensions are 150 μm².

Keywords: Neural Architecture (NA), Back Propagation Algorithm (BPA), Neural Network
1. Introduction
When we speak of intelligence, it is actually acquired: learned from past experience. Although intelligence is a biological word, it can be realized on the basis of mathematical equations, giving rise to the science of Artificial Intelligence (AI). To implement this intelligence, artificial neurons are used. Artificial neurons can be classified, in terms of their implementation, into three categories: Digital, Analog or Hybrid; it is the implementation of the neuron and of its learning algorithm that makes it Digital, Analog or Hybrid. In all three categories the learning capability has to be integrated into the design, and the learning rules are based on mathematical algorithms representing specific applications.

The artificial neurons in this paper are realized with analog components such as multipliers and adders, together with a tan-sigmoid function; the neural architecture is thus a complete analog structure. Analog implementation reduces circuit complexity and also avoids the conversion of real-time analog samples into digital signals. The neural architecture (NA) designed in this paper is a feed-forward network with two inputs, three hidden-layer neurons and one output, and the learning algorithm used is back propagation, realized in the analog domain. This neural architecture can be used for many analog signal processing activities.

Figure 1: 2 input to 1 output Neuron (two multipliers feeding an adder, followed by the tansig function)

The neuron of Figure 1, comprising multipliers and an adder along with the tan-sigmoid function, can be expressed mathematically as

$$n = i_1 w_1 + i_2 w_2 \qquad (1)$$

$$a = \mathrm{tansig}(n + \mathrm{bias}) \qquad (2)$$

where $a$ is the output of the neuron and $n$ is the intermediate output for the inputs $i$ and the neuron weights $w$; the bias is optional and user defined. To design and implement a neural architecture in analog VLSI, equations 1 and 2 are to be realized using analog circuits.

Training the network to realize a functionality is achieved through a known input and a known target for that input, with the initial weights and bias of the network assumed to be constants. The learning algorithm computes the difference between the neuron output obtained as per equations 1 and 2 and the target; this error is back propagated to update the weight and the bias elements, and the process is continued until the error reaches a permissible set limit. To validate the digital learning capability of the NA, the logic functions AND, OR, XOR and NOT are implemented using the proposed architecture; analog operations such as sine-wave learning, amplification and frequency multiplication are also proven. The NA is further used for image compression and decompression, considering a small input matrix of pixel intensities. The focus of this paper is to implement the neural architecture with a back-propagation learning/training algorithm for data compression and generic signal processing.
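As a point of reference, the behaviour described by equations 1 and 2 can be summarised in a short numerical sketch. Python is used here purely for illustration; the actual realization in this paper is an analog circuit, and all names below are illustrative:

```python
import numpy as np

def tansig(x):
    """Tan-sigmoid activation, equivalent to tanh(x)."""
    return np.tanh(x)

def neuron_forward(i1, i2, w1, w2, bias=0.0):
    """Forward pass of the 2-input neuron of equations (1)-(2)."""
    n = i1 * w1 + i2 * w2        # equation (1): weighted sum
    return tansig(n + bias)      # equation (2): activation

# Example: two inputs with unit weights and no bias
print(neuron_forward(0.5, -0.2, 1.0, 1.0))
```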
Multiple Layers of Neurons
The set of single-layer neurons connected with each other is called a multiple-layer network of neurons.

2. Back Propagation Algorithm
The essence of a neural network lies in the way its weights are identified and used, through a definite algorithm, to realize a functionality. In this paper the Back Propagation (BP) algorithm is adopted and implemented in the supervised learning method, with the training performed in the analog domain.

Figure 2: Multiple Layers Neural Network

The inputs to the neurons shown in Figure 2 are multiplied by the weight matrix; the resultant outputs are summed up and passed through a neuron activation function (NAF), and the output obtained from the activation function is taken through to the next layer for further processing. Two inputs v1 and v2 are connected to the neurons in the hidden layer through the weights w11 to w16, and the outputs of the hidden layer are connected to the output layer through the weights w21 to w23; the final output is a21. The actual output of a layer is given by $a_i$, and the target (desired output) for the ith output unit is $d_i$ (see Figure 2). The error, or cost function, is

$$E = \frac{1}{2}\left(a2_i - d_i\right)^2 \qquad (3)$$

This process of computing the error is called a forward pass. The error then has to propagate backwards from the output to the input. How the output unit affects the error in the ith layer is given by differentiating equation 3 with respect to $a_i$:

$$\frac{\partial E}{\partial a_i} = a2_i - d_i \qquad (4)$$

Equation 4 can be written in the other form

$$\delta_i = \left(a2_i - d_i\right)\, d(a2_i) \qquad (5)$$

where $d(a_i)$ is the differentiation of $a_i$. The weight update is then given by

$$\Delta w_{ij} = \eta\, \delta_i\, a1_i \qquad (6)$$

where $a1_i$ is the output of the hidden layer (the input to the output neuron) and $\eta$ is the learning rate. The $\delta$ for the hidden layer is calculated as

$$\delta_{\mathrm{hidden}} = d(a1_i) \sum_i w_{ij}\, \delta_i \qquad (7)$$

and the weight update for the hidden layer, with this new $\delta$, is again done using equation 6. Equations 3 to 7 depend on the number of neurons present in each layer and on the number of layers present in the network.
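Since equations 3 to 7 fully specify one training iteration, they can be cross-checked with a small behavioural model. The sketch below assumes tanh as the tansig function (so that $d(a) = 1 - a^2$) and is purely illustrative software, not the analog implementation described in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                                # learning rate (eta)
w1 = rng.uniform(-1, 1, (3, 2))          # input -> hidden weights w11..w16
w2 = rng.uniform(-1, 1, (1, 3))          # hidden -> output weights w21..w23

def train_step(v, d):
    """One forward/backward pass of the 2:3:1 network, equations (3)-(7)."""
    a1 = np.tanh(w1 @ v)                        # hidden-layer outputs a1_i
    a2 = np.tanh(w2 @ a1)                       # network output a21
    delta_o = (a2 - d) * (1 - a2 ** 2)          # eq (5), with d(a) = 1 - a^2
    delta_h = (1 - a1 ** 2) * (w2.T @ delta_o).ravel()   # eq (7)
    w2 -= eta * np.outer(delta_o, a1)           # eq (6), output layer
    w1 -= eta * np.outer(delta_h, v)            # eq (6), hidden layer
    return 0.5 * ((a2 - d) ** 2).item()         # eq (3), the cost E

v, d = np.array([0.5, -0.3]), 0.8               # one input pattern and target
for _ in range(5):
    print(train_step(v, d))                     # E shrinks with each update
```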
3. Analog Components for Neural Architecture
The multiplier block, the adders and the activation function model the artificial neural network. The blocks to be used are as follows:
1. Multiplication block
2. Adders
3. NAF block with derivative
The NAF block provides, in addition to the activation, a derivative of the output, which is needed to implement equation 5.

3.1. Multiplier Block (mult)
The Gilbert cell is used as the multiplier block, and it works in the subthreshold region. The current for an NMOS transistor operating in the subthreshold region is given by

$$I_{ds} = I_0\, e^{q(V_g - nV_s)/nkT}\left(1 - e^{-qV_{ds}/kT}\right) \qquad (8)$$

where all the voltages $V_g$, $V_s$ and $V_d$ are taken with respect to the bulk voltage $V_b$, $n$ = 1.2 to 1.6 is the slope factor, $kT/q$ = 25 mV at room temperature, and

$$I_0 = 2 n \mu C_{ox} \left(\frac{kT}{q}\right)^2 \frac{W}{L}\, e^{-qV_{t0}/nkT} \qquad (9)$$

The current equation for a PMOS transistor is the same as equation 8, but all the voltages have opposite signs:

$$I_{ds} = I_0\, e^{q(-V_g + nV_s)/nkT}\left(1 - e^{qV_{ds}/kT}\right) \qquad (10)$$

When $qV_{ds}/kT \geq 4$, in other words when $V_{ds}$ is about 100 mV or more, the term $e^{-qV_{ds}/kT}$ in equation 8 is approximately zero, so that

$$I_{ds} = I_0\, e^{q(V_g - nV_s)/nkT} \qquad (11)$$

Equation 11 is the saturation current in the subthreshold region, as it reveals that $I_{ds}$ is theoretically independent of $V_{ds}$: the current saturates. Now, considering each transistor in the saturation region, one can design the circuit for the Gilbert cell multiplier. The schematic of the Gilbert cell is shown in Figure 3. Designing for $I_{ds}$ of 2 mA, the device sizes used are (W/L)1-2 = 120, (W/L)3-6 = 2.5, (W/L)7-8 = 4 and (W/L)9 = 44.

Figure 3: Gilbert cell schematic
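As a numerical sanity check on equations 8 and 11, the saturation behaviour can be tabulated directly. The constants below, including the pre-exponential current $I_0$, are assumed placeholder values for illustration, not the fabricated device parameters:

```python
import numpy as np

# Assumed constants: kT/q = 25 mV, slope factor n = 1.4, placeholder I0
UT, n, I0 = 0.025, 1.4, 1e-9

def ids_subthreshold(vg, vs, vds):
    """NMOS subthreshold current of equation (8); voltages w.r.t. bulk."""
    return I0 * np.exp((vg - n * vs) / (n * UT)) * (1 - np.exp(-vds / UT))

# Saturation check (equation 11): once Vds >= 4*kT/q = 100 mV, the
# (1 - exp(-Vds/UT)) factor is within about 2% of 1, so Ids no longer
# depends on Vds.
for vds in [0.025, 0.05, 0.1, 0.2]:
    print(f"Vds = {vds:5.3f} V  ->  Ids = {ids_subthreshold(0.3, 0.0, vds):.3e} A")
```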
3.2. Neuron Activation Function (NAF)
The neuron activation function designed here is the tan sigmoid. Two designs are considered for the NAF:
1. Differential amplifier as NAF
2. Modified differential amplifier as NAF with differentiation output

3.2.1. Differential Amplifier Design as a Neuron Activation Function (tan)
This block is named tan in the final schematics of the Neural Architecture. A differential amplifier designed to work in the subthreshold region acts as a neuron activation function. To understand this, consider the simple differential pair shown in Figure 4.

Figure 4: Simple differential amplifier

With the transistor currents in the subthreshold region given by equation 11, and assuming the source and bulk to be shorted and both transistors to have the same W/L, the output current is

$$I_{out} = I_b\, \frac{1 - e^{-q(V_2 - V_1)/nkT}}{1 + e^{-q(V_2 - V_1)/nkT}} \qquad (12)$$

that is,

$$I_{out} = I_b \tanh\!\left(\frac{q(V_2 - V_1)}{2nkT}\right) \qquad (13)$$

Equation 13 proves the functionality of the differential amplifier as a tan-sigmoid function generator. As is evident from equation 13, $I_{out}$ is a combination of the bias current and the voltage input, so the same stage can also be used as a multiplier when one input is a current and the other is a voltage. Designing for a bias current of 150 nA, the device sizes are (W/L)1-2 = 1.3, (W/L)3-4 = 2 and (W/L)5 = 3.5.

3.2.2. Modified Differential Amplifier Design for Differentiation Output (fun)
This block is named fun in the final schematic. The same circuit should be able to output both the neuron activation function and the differentiation of the activation function; the design is therefore a variation of the differential amplifier of Figure 4, with the structure modified for the differentiation output, and it remains functional in the subthreshold region. The differentiation of the activation function of equation 13 is a sech² function, and the current for the differentiation output is

$$I = I_b\, \mathrm{sech}^2\!\left(\frac{q(V_2 - V_1)}{2nkT}\right) \qquad (14)$$

The schematic of the design is shown in Figure 5; here (W/L)1-2 is 16, with the remaining devices sized as indicated in the schematic.

Figure 5: Neuron Activation Function circuit
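Equations 13 and 14 can likewise be visualised with a few lines of illustrative code (UT and n are assumed values; $I_b$ = 150 nA as in the design; the function names are not part of the chip):

```python
import numpy as np

UT, n, Ib = 0.025, 1.4, 150e-9           # kT/q, slope factor, bias current

def naf_current(v2, v1):
    """tan block output per equation (13): Ib * tanh(q(V2-V1)/2nkT)."""
    return Ib * np.tanh((v2 - v1) / (2 * n * UT))

def naf_derivative_current(v2, v1):
    """fun block differentiation output per equation (14): Ib * sech^2."""
    x = (v2 - v1) / (2 * n * UT)
    return Ib / np.cosh(x) ** 2          # sech^2(x) = 1 / cosh^2(x)

vd = np.linspace(-0.3, 0.3, 7)           # differential input sweep
print(naf_current(vd, 0.0))              # saturates toward +/- Ib
print(naf_derivative_current(vd, 0.0))   # bell-shaped, peak of Ib at 0
```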
3.3. Adders
The output of the Gilbert cell is in the form of current (transconductance), so the node connecting the respective outputs of the Gilbert cells acts as an adder itself.

4. Realisation of Neural Architecture using Analog Components
The components designed in the previous section are used to implement the neural architecture. Figure 6 shows exactly how the neural architecture of Figure 2 is implemented using analog blocks. The mult block is the Gilbert cell multiplier designed in Section 3.1. The tan block is the differential amplifier designed in Section 3.2.1; it is used as the neuron activation function as well as for the multiplication purpose. The fun block is the neuron activation function circuit with differentiation output designed in Section 3.2.2. The input layer is the input to the 2:3:1 network; the hidden layer is connected to the input layer by the first-layer weights, named w1i, and the output layer is connected to the hidden layer through the weights w2j. The signal ainput is the input to the hidden layer, and op is the output of the 2:3:1 network.

Figure 6: Implementation of the Neural Architecture using Analog Blocks

The weight update of the output layer implements equation 5, $\delta_i = (a2_i - d_i)\, d(a2_i)$. Since, by equation 13, $I_{out}$ of the differential amplifier is the multiplication of the input voltage applied to it and its bias current, the differentiation current is, on the same basis, multiplied with the difference between the target and the output to form $\delta$. The $\delta$ so formed is then multiplied with the outputs of the hidden layer (the inputs to the output layer), as shown in Figure 7.
Figure 7: Block diagram for weight update scheme for the output neuron
The outputs of the mult blocks are the weight updates for the weights in the output layer, calculated using equation 6, $\Delta w_{ij} = \eta\, \delta_i\, a1_i$.

4.1. Updating the Hidden Layer Weights
The hidden-layer weights in the architecture are updated from the errors propagating back from the output layer; for this, the output of the output neuron is connected back through the weights, as shown in Figure 8.
Figure 8: Block diagram for weight update scheme for hidden layer neuron
This update requires the realization of equation 7, $\delta_{\mathrm{hidden}} = d(a1_i) \sum_i w_{ij}\, \delta_i$, which deals with the formation of $\delta$ for the hidden layer: the output-layer $\delta$ is multiplied by the corresponding weight, and the product is given to the differential amplifier whose bias current is the differentiation of the respective neuron output in the hidden layer. Such a $\delta$ has to be formed for each neuron in the hidden layer. The $\delta$ so formed is then used to update the weights in the hidden layer as implied by equation 6, $\Delta w_{ij} = \eta\, \delta_i\, a_{input}$, where the inputs in this case are v1 and v2.

4.2. Weight Storage and Update Mechanism
The weights for the proposed neural architecture are stored on capacitors. Figure 9 shows the update mechanism and the initialisation of the weights. The weights are initialised using the clock ClkI: whatever voltage is applied to the weight-initialisation line is given to Cw while ClkI is high, so the weight initialisation can also be done external to the chip. One has to make ClkI low before starting to train the chip. The clock signal ClkW is used for updating the weight: whenever this clock is high the weight is updated; else there is no update and the previous value of the weight is maintained. Since the inputs are fed to the network in analog form, there is no need for analog-to-digital converters; this is one of the major advantages of this work.
Figure 9: Weight Update and Initialisation Scheme
Cw is used to store the weight and Cwd is used to store the weight update.
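A behavioural reading of this clocked storage scheme, ignoring capacitor leakage and switch non-idealities, is sketched below (the class and its names are illustrative only, not part of the design):

```python
# Behavioural stand-in for the capacitor weight storage of Figure 9.
class WeightCell:
    def __init__(self):
        self.cw = 0.0    # voltage on Cw (the stored weight)
        self.cwd = 0.0   # voltage on Cwd (the pending weight update)

    def initialise(self, v_init, clk_i):
        # While ClkI is high, the weight-initialisation line drives Cw.
        if clk_i:
            self.cw = v_init

    def tick(self, dw, clk_w):
        self.cwd = dw
        # While ClkW is high the stored weight absorbs the update;
        # otherwise the previous value is held on the capacitor.
        if clk_w:
            self.cw += self.cwd

cell = WeightCell()
cell.initialise(1.0, clk_i=True)   # external initialisation to 1 V
cell.tick(-0.05, clk_w=True)       # one training update
print(cell.cw)                     # 0.95
```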
4.3. Neuron Application: Image Compression and Decompression
The network architecture proposed and designed in the previous sections is used to compress images, with the Back Propagation algorithm as the training algorithm. The proposed 2:3:1 network has an inherent capability of compressing its inputs: as there are two inputs and one output, the compression achieved is 50%. A 1:3:2 neural network, with three neurons in the hidden layer and two in the output layer, is designed for the decompression purpose. An image consisting of pixel intensities is fed to the network shown in Figure 10 for compression and decompression; during training, the error propagates from the decompression block back to the compression block. Once the network is trained for different inputs, the two architectures are separated and can be used independently as a compression block and a decompression block.

Figure 10: Image Compression and Decompression using proposed Neural architecture
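In software terms the scheme behaves like a small autoencoder: a 2:3:1 compressor followed by a 1:3:2 decompressor, trained end to end so that the reconstruction error drives both blocks. The sketch below is a simplified stand-in for the analog implementation; names, data and the training schedule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.05
w_c1, w_c2 = rng.uniform(-1, 1, (3, 2)), rng.uniform(-1, 1, (1, 3))  # 2:3:1 compressor
w_d1, w_d2 = rng.uniform(-1, 1, (3, 1)), rng.uniform(-1, 1, (2, 3))  # 1:3:2 decompressor

def forward(v):
    h1 = np.tanh(w_c1 @ v); c = np.tanh(w_c2 @ h1)   # one compressed value per pixel pair
    h2 = np.tanh(w_d1 @ c); y = np.tanh(w_d2 @ h2)   # reconstructed pixel pair
    return h1, c, h2, y

def train_step(v):
    """Reconstruction error back-propagates through both blocks."""
    h1, c, h2, y = forward(v)
    d_y  = (y - v) * (1 - y ** 2)
    d_h2 = (1 - h2 ** 2) * (w_d2.T @ d_y)
    d_c  = (1 - c ** 2)  * (w_d1.T @ d_h2)
    d_h1 = (1 - h1 ** 2) * (w_c2.T @ d_c)
    w_d2 -= eta * np.outer(d_y, h2);  w_d1 -= eta * np.outer(d_h2, c)
    w_c2 -= eta * np.outer(d_c, h1);  w_c1 -= eta * np.outer(d_h1, v)

pixels = rng.uniform(-1, 1, (100, 2))   # pixel pairs scaled to [-1, 1]
for _ in range(500):
    for v in pixels:
        train_step(v)
```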
5. Results and Discussions

5.1. Simulation Result for Gilbert Cell Multiplier
The designed Gilbert cell is simulated using HSPICE. The simulation result shown in Figure 11 is for the multiplication of two voltages: v1 is 0.2 Vpp at 1 MHz and v2 is 0.2 Vpp at 10 MHz.
Figure 11: Multiplication operation of Gilbert cell multiplier (mult)
The waveform vout is the multiplication of the voltages v1 and v2 performed by the circuit; the output amplitude is 1.5 mVpp. The vout can be seen to match the theoretical multiplication output, and the result suits the neuron architecture.
Figure 12: DC characteristics of Gilbert cell multiplier
For the DC characteristics, the input voltages v1 and v2 are varied from −0.4 V to 0.4 V; the characteristic shows a maximum output of 2 mV.

5.2. Simulation Result for Neuron Activation Function
The neuron activation function is basically a tanh(x) function, and its differentiation is sech²(x). The theoretical value of y = tanh(x) and its derivative

$$\frac{\partial y}{\partial x} = \mathrm{sech}^2(x)$$

are shown in Figure 13.
Figure 13: (a) y=tanh(x) (b) Derivative of y
The differential amplifier circuit designed to generate this function was designed for the range 3 to −3. The result of the neuron activation block (tan) is shown in Figure 14; the y value varies from 1 to −1.

Figure 14: Circuit output for Neuron Activation function block (tan)

The result of the modified circuit designed for generating the differentiation output (fun) is shown in Figure 15. The derivative output is shown together with the theoretical derivative output calculated by the HSPICE tool. The neuron activation function output is a current variation from 0 to 140 nA, and the differentiation output bumps from 0 µA to 6 µA. One can find the similarity between Figures 13, 14 and 15, thus validating the result.

Figure 15: Neuron activation function and its derivative, DC analysis (fun)
The transient simulation for a sine-wave input is shown in Figure 16.
Figure 16: Neuron Activation Function and Derivative Transient analysis
Figure 16 shows the output of the neuron activation circuit for a sine wave of 0.5 V amplitude and 10 MHz frequency applied as the input; the NAF output and the corresponding differentiation output are both shown. The derivative of a sine is a cosine, and the designed circuit indeed gives the differentiation of the activation function.

5.3. Simulation of 2:3:1 Neural Architecture
The Neural Architecture functionality was validated for both digital and analog operation. For digital operation, the functionality was verified for the logic gates AND, OR, XOR and NOT. Figure 17 shows the AND operation learned by the 2:3:1 Neural Architecture; the input voltages v1 and v2 are given to the architecture along with the target.
Figure 17: AND operation learned by 2:3:1 Neural Architecture
For the AND operation the input voltages swing from 1 V to −1 V, and the target given to the circuit also varies from 1 V to −1 V. The weights are initialised to the value 1 V. The output generated by the neuron clearly follows the target, with the output of the neural architecture swinging from −0.726 V to −3.02 V. Figure 18 shows the OR gate function of the Neural Architecture.

Figure 18: OR operation learned by 2:3:1 Neural Architecture

The input voltages v1 and v2 swing from 1 V to −1 V for the OR operation, and the target given to the circuit for OR also varies from 1 V to −1 V. The weights for the OR operation are initialised to the value 1 V. The output generated by the neuron follows the target, producing an output swing of 2.26 V. Figure 19 shows the XOR operation of the Neural Architecture.

Figure 19: XOR operation learned by 2:3:1 Neural Architecture

The inputs v1 and v2 for this operation were chosen with 0.5 Vpp swing, and the target was also given as 0.5 Vpp; the weights were initialised to 0.19 V. The XOR operation output of the Neural Architecture showed a voltage swing of 1.5 Vpp.

The convergence of the output depends on the weight initialisation and on the output swings of the components designed to implement the Neural Architecture; the effect of the weight initialisation and of the multiplier output swing on the convergence is shown in Figure 20. For this experiment all the weights were initialised to 2 V, and the input voltage swing was 2 Vpp, from 0 V to 2 V. The output did not converge, as the result in Figure 20 shows. The reason for such behaviour is the generation of a huge error: the derivative function gives a high output and, since the output of the multiplier is also large, the error (which is actually the difference between the target input and the network output) becomes large too. This leads the network to fall into a local minimum. To avoid such behaviour of the neural architecture, the weights should be initialised to a lower value and the Gilbert cell should be designed for low output swings.

Figure 20: Effect of weight initialisation on convergence

Figure 21 shows the NOT operation learned by the Neural Architecture. The voltage, swinging from 1 V to 0 V, was applied to only one input, v1; the second input was connected to ground. The target was applied with a 1 V swing (0 to 1 V). The NOT operation output had a voltage swing of 0.546 V.

Figure 21: NOT operation learned by 2:3:1 Neural Architecture
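This sensitivity to the initial weight values is easy to reproduce in the behavioural model introduced earlier: with large initial weights the tanh stages saturate, their derivatives collapse, and training stalls or converges poorly. The experiment below is an illustrative software sketch with assumed values, not a circuit simulation:

```python
import numpy as np

def train_and(scale, steps=2000, eta=0.1, seed=0):
    """Train a 2:3:1 tanh network on AND (+/-1 logic); return final cost."""
    rng = np.random.default_rng(seed)
    w1 = scale * rng.uniform(-1, 1, (3, 2))
    w2 = scale * rng.uniform(-1, 1, (1, 3))
    data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
    for _ in range(steps):
        for v, d in data:
            v = np.asarray(v, float)
            a1 = np.tanh(w1 @ v)
            a2 = np.tanh(w2 @ a1)
            delta_o = (a2 - d) * (1 - a2 ** 2)
            delta_h = (1 - a1 ** 2) * (w2.T @ delta_o).ravel()
            w2 -= eta * np.outer(delta_o, a1)
            w1 -= eta * np.outer(delta_h, v)
    cost = 0.0
    for v, d in data:
        a2 = np.tanh(w2 @ np.tanh(w1 @ np.asarray(v, float)))
        cost += 0.5 * ((a2 - d) ** 2).item()
    return cost

print(train_and(2.0))   # large initialisation: saturated derivatives hinder learning
print(train_and(0.1))   # small initialisation: training proceeds normally
```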
Validation for Analog Operation
The Neural Architecture was designed with analog components, and its analog capability was verified with sine-wave learning. The analog input given to the network was a sine wave applied at v1 with 0.5 Vpp amplitude and 50 kHz frequency, with v2 connected to ground. The network was trained for a sine-wave output with the same frequency and amplitude; in a way, the network learned to replicate the input sine wave as its output. The simulation result for the sine-wave learning is shown in Figure 22. The output swing was from +0.5 V to −0.5 V.

Figure 22: 50 KHz Sine Wave Learning By Neural Architecture

The network was then made to learn a 10 MHz sine wave with 0.5 Vpp amplitude, shown in Figure 23. As can be seen in the figure, the output clearly follows the target applied to the circuit for learning, and the network faithfully reproduced the desired 10 MHz target. The convergence time for 10 MHz was calculated as 200 ns, with 1% error with respect to amplitude.

Figure 23: 10 MHz Sine Wave Learning By Neural Architecture

The Neural network was also experimented with for the generation of sine waves of different frequency and amplitude. Figure 24 shows the result for a 100 kHz output generated from a 50 kHz input: the target for the output was set as a sine wave of 1.5 Vpp amplitude and 100 kHz frequency, greater than the input signal. The output shown in Figure 24 is a sine wave of 100 kHz frequency and 1.51 Vpp amplitude, validating the capability of the network to work as an amplifier and frequency multiplier.

Figure 24: Generation of 100 KHz from 50 KHz

5.4. Image Compression and Decompression using Neural Architecture
The Neural Architecture is extended to the application of image compression and decompression, further testing the amplification and frequency multiplication capability of the Neural Architecture. The input v1 was a sine wave of 1 Vpp at 5 MHz, and v2 was a sine wave of 0.5 Vpp at 10 MHz. As there is one output for the two inputs, there is a 50% compression; the compressed output was a DC signal of 233.63 nV. The simulation results for image compression and decompression are shown in Figure 25, with the decompressed output shown in the same window. The decompressed output for v1 was a 1.2 Vpp sine wave at 5 MHz, and that for v2 was a 0.51 Vpp sine wave at 10 MHz.

Figure 25: Image compression and Decompression Simulation
5.5. Summary of Results Obtained
Table 1 describes the summary of the simulation results obtained for the different blocks designed. The convergence was verified for an analog input sine wave of 10 MHz with 0.5 V amplitude.

Table 1: Summary of Results

Parameter                                  Value
Power supply                               ±3 V
Input range for Gilbert cell               ±3 V
Output swing for Gilbert cell              ±1.5 V (max input)
Output range of NAF (tan)                  ±2.5 V (max input)
Output range of NAF (fun)                  ±2.4 V (max input)
Differentiation output of NAF (fun)        1 μA (max input)
Input range for Neural Architecture        ±3 V
Output swing of Neural Architecture        ±2.8 V
Convergence time (10 MHz)                  200 ns (1% error)
Usability                                  Digital and Analog

5.6. Layout Drawn for the Design
The layout for the design was drawn in 0.35 micron technology using Virtuoso®, and the results were obtained using HSPICE. The layout was drawn keeping in mind the usage of the block in other neural networks: only two metal layers, Metal 1 and Metal 2, were used, so that when the block is used in another architecture more layers remain usable for routing. The chip dimensions are 150 μm².

Figure 26: Layout for 2:3:1 Neural Architecture without IO pads
6. Conclusion
• A Neural Architecture was proposed using the designed analog components, and the Back Propagation algorithm was used for the training of the network.
• The Gilbert cell multiplier was designed with an input range of ±3 V and a maximum output swing of ±1.5 V.
• The Neuron Activation function was designed for an input range of ±3 V, with output ranges of ±2.5 V (tan) and ±2.4 V (fun) and a maximum differentiation current output of 1 μA.
• The Neural Architecture works on a supply voltage of ±3 V with an output swing of ±2.8 V, and the designed architecture had a convergence time of 200 ns for an analog input with 1% error.
• The Neural network was shown to be useful for digital and analog operations, and 50% image compression was achieved using the proposed Neural Architecture.
• The architecture proposed can be used with other existing architectures for neural processing.