Code No: RR410212

Set No.1

IV B.Tech. I Semester Supplementary Examinations, February 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours                                                   Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks

1. (a) How do you justify that the brain is a parallel distributed processing system? [8]
   (b) Explain the structure of the brain. [8]

2. (a) Explain in detail “Recall in Neural Networks”. [8]
   (b) Explain autonomous and non-autonomous dynamical systems. [8]

3. Write and discuss the Single-layer Continuous Perceptron Training Algorithm. [16]

4. With a neat block diagram and flow chart, explain the Error Back-propagation algorithm. [16]

5. Design a simple continuous-time network using the concept of computational energy function and also evaluate the stationary solutions of the network. [16]

6. What are Kohonen’s self-organizing maps? Explain the architecture and the training algorithm used for Kohonen’s SOMs. [16]

7. What are the general equations for hyperplanes in two- and three-dimensional space? What geometric figures do these equations describe? [16]

8. Explain how neurocomputing circuits can be modeled using digital and analog circuits. [16]


Code No: RR410212

Set No.2

IV B.Tech. I Semester Supplementary Examinations, February 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours                                                   Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks

1. (a) With the help of a neat diagram, explain the analogy of a logical neuron. [8]
   (b) Explain what an artificial neural network is and show how a basic ANN is constructed using a biological neuron. [8]

2. Explain in detail the differences between competitive learning and differential competitive learning. [16]

3. Write and discuss the R-category Discrete Perceptron Training Algorithm. [16]

4. Demonstrate the main features of the error back-propagation algorithm applied to a two-layer network with an example. [16]

5. Discuss how the vector field method can be used to illustrate real-time phenomena in networks with finite-gain neurons. [16]

6. Explain the architecture of ART-1 neural networks with emphasis on the function of each part. What is the importance of the vigilance parameter in its working? [16]

7. State the linear programming problem. Explain how to solve optimization problems using neural networks. [16]

8. Explain template matching networks in neural processing. Draw a template bit map and the corresponding circuit diagram. [16]


Code No: RR410212

Set No.3

IV B.Tech. I Semester Supplementary Examinations, February 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours                                                   Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks

1. (a) Explain the significance of action potential and resting potential in neural cells. [8]
   (b) Explain briefly how information is processed in neural networks. [8]

2. Write notes on:
   (a) Error correction learning. [8]
   (b) Reinforcement learning. [8]

3. Describe why a single-layer perceptron network cannot learn the EXCLUSIVE-OR logic. Show how a two-layer network can accomplish such a task. [16]

4. Draw the architectures of feed-forward type and recurrent type neural networks and explain their differences. [16]

5. Discuss the stability property of a dynamical system, taking an example. [16]

6. Explain the difference between supervised and unsupervised learning. Give two applications where each type of learning is appropriate. Justify your answer. [16]

7. What are the two broad categories into which pattern recognition techniques fall? Explain. [16]

8. What do you understand by finite resolution and conversion error? Explain the circuit producing a single digitally programmable weight employing a multiplying D/A converter (MDAC). [16]


Code No: RR410212

Set No.4

IV B.Tech. I Semester Supplementary Examinations, February 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours                                                   Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks

1. Discuss the classification of neural nets based on the training, architecture and activation functions used. [16]

2. Explain in detail the concepts of transient state, steady state, equilibrium state and stable state. [16]

3. State and prove the perceptron convergence theorem. [16]

4. Derive the learning rule for the Back Propagation network. What are its major drawbacks? Suggest solutions to overcome these drawbacks. [16]

5. Explain the recursive asynchronous update of a corrupted digit 4. [16]

6. Write short notes on the Grossberg layer and its training. Explain with an example. [16]

7. Derive a numerical method for finding the solution of a differential equation. [16]

8. What are the various active building blocks of neural networks? Explain the current mirror and the inverter-based neuron in detail. [16]
