
UNIT-1

(Cluster – I): INTRODUCTION TO NEURAL NETWORK


C1. Soft Computing vs. hard computing, historical perspective
C2. Biological neuron vs. artificial neuron
C3. Models of Artificial Neural Network, Feed Forward and Feedback networks,
Training and learning
C4. Learning Methods
C5. Learning tasks
C6. Activation functions
C7. Basic McCulloch-Pitts model of NN
C8. Single Layer Perceptron, Perceptron architecture, Perceptron training algorithm

8 Marks questions
CG-1

Concept-1: Soft Computing vs. hard computing

1. Discuss briefly the features of hard computing and soft computing?


2. Illustrate any three applications of soft computing.

Concept-2: Biological neuron vs. artificial neuron

3. Explain the concept of artificial neuron & biological neuron. Sketch the model of
artificial neuron.
4. Draw a simple artificial neuron and discuss the calculation of net input.
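
As a quick reference for questions 3-4: for inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$ and bias $b$, the net input of a simple artificial neuron is $y_{in} = b + \sum_{i=1}^{n} w_i x_i$, and the output is $y = f(y_{in})$, where $f$ is the chosen activation function.
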
Concept-3: Models of Artificial Neural Network

5. Define network architecture and give its classifications.


6. What are the basic models of an artificial neural network?

Concept-4: Learning Methods

7. Define learning and differentiate between various learning methods.


8. Explain supervised and unsupervised learning methods with examples.

CG-2

Concept-5: Learning tasks

9. Discuss the following learning tasks:


i. Linear regression
ii. Logistic regression
iii. Pattern Association
iv. Clustering
10. Explain the concept of classification and regression with necessary examples.
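
As a reference for questions 9-10, the two regression models can be written compactly: linear regression fits $\hat{y} = w^{T}x + b$ and is trained by minimizing the squared error $\tfrac{1}{N}\sum_i (y_i - \hat{y}_i)^2$ (a regression task with a real-valued output), while logistic regression fits $\hat{y} = \sigma(w^{T}x + b) = 1/(1 + e^{-(w^{T}x + b)})$ and is trained by minimizing the cross-entropy loss (a classification task whose output in $(0, 1)$ is read as a class probability).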

Concept-6: Activation functions

11. Discuss about various activation functions.


12. List and explain the commonly used activation functions.
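
A minimal Python sketch of the activation functions most commonly expected in answers to questions 11-12; the threshold and steepness defaults are illustrative assumptions.

```python
import numpy as np

def binary_step(net, theta=0.0):
    # 1 if the net input reaches the threshold, else 0
    return np.where(net >= theta, 1, 0)

def bipolar_step(net, theta=0.0):
    # +1 if the net input reaches the threshold, else -1
    return np.where(net >= theta, 1, -1)

def binary_sigmoid(net, lam=1.0):
    # smooth output in (0, 1); lam controls steepness
    return 1.0 / (1.0 + np.exp(-lam * net))

def bipolar_sigmoid(net, lam=1.0):
    # smooth output in (-1, 1); equals tanh(lam * net / 2)
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def relu(net):
    # rectified linear unit: max(0, net)
    return np.maximum(0.0, net)
```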

Concept-7: Basic McCulloch-Pitts model of NN

13. Write a short note on the McCulloch-Pitts neuron model and realize the XOR logic function using the above model.
14. Explain in detail the architecture of the McCulloch-Pitts neuron model and also realize a 2-input AND gate and an OR gate using the above neuron model.
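
A minimal sketch for question 14, assuming the usual McCulloch-Pitts conventions: binary inputs, unit excitatory weights and a fixed firing threshold (2 for AND, 1 for OR).

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts neuron: fires (1) only if the weighted sum reaches the threshold
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        y_and = mp_neuron((x1, x2), (1, 1), threshold=2)  # 2-input AND
        y_or  = mp_neuron((x1, x2), (1, 1), threshold=1)  # 2-input OR
        print((x1, x2), "AND:", y_and, "OR:", y_or)
```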

Concept-8: Perceptron training algorithm

15. Explain how the linear separability concept is implemented using a perceptron training network with the AND function for bipolar inputs and targets.
16. Find the weights required to perform the following classification using a perceptron network. The vectors (1, 1, 1, 1) and (-1, 1, -1, -1) belong to the class (so have target value 1); the vectors (1, 1, 1, -1) and (1, -1, -1, 1) do not belong to the class (so have target value -1). Assume a learning rate of 1 and initial weights of 0.
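
A minimal Python sketch of the perceptron training asked for in question 16. The bipolar step convention at net = 0 and the zero initial bias are assumptions; textbook variants that use a dead-zone threshold give slightly different intermediate weights.

```python
import numpy as np

# Training data from question 16 (bipolar inputs, bipolar targets)
X = np.array([[ 1,  1,  1,  1],
              [-1,  1, -1, -1],
              [ 1,  1,  1, -1],
              [ 1, -1, -1,  1]])
t = np.array([1, 1, -1, -1])

w = np.zeros(4)   # initial weights 0, as stated
b = 0.0           # initial bias 0 (assumed)
alpha = 1.0       # learning rate 1, as stated

step = lambda net: 1 if net >= 0 else -1

for epoch in range(20):
    changed = False
    for x, target in zip(X, t):
        y = step(np.dot(w, x) + b)
        if y != target:                 # perceptron rule: update only on error
            w = w + alpha * target * x
            b = b + alpha * target
            changed = True
    if not changed:                     # a full epoch without changes => converged
        break

print("weights:", w, "bias:", b)
```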

4 Marks questions
CG-1

1. Compare the function of synapse in biological and artificial neurons.


2. How does learning take place in supervised learning?
3. Compare feed forward and feedback networks.
4. Draw the architecture of fully connected recurrent neural network.
5. Give the significance of using bias.
6. Differentiate the concepts of clustering and association.
7. Distinguish linear and logistic regression.
8. Justify why an Artificial Neural Network is called an adaptive system during training.

CG-2

9. In what ways is bipolar representation better than binary representation?


10. Draw and compare linear separability graphs for AND, OR, XOR functions.
11. Justify: the XOR function is not linearly separable by a single decision boundary line.
12. How can the equation of a straight line be formed using linear separability?
13. State the activation function used in perceptron network.
14. List the limitations of single layer perceptron.
15. What is meant by epoch in training process?
16. Give the significance of learning rate.

2 Marks questions
1. Error based learning is also called?
2. What type of learning is output based learning?
3. Give the applications of artificial neural networks.
4. Indicate the difference between excitatory and inhibitory weighted interconnections.
5. What is the role of threshold in activation function?
6. What happens if the learning rate is too low?
7. What is the outcome of choosing very high learning rate?
8. Give the range of learning rate?
9. Draw the bipolar step activation function.
10. Differentiate between binary and bipolar sigmoidal activation functions.
11. What type of activation function is used in Rosenblatt’s perceptron?
12. Can XOR logic function be linearly separable by single layer perceptron?
13. What are the other names of Delta rule?
14. What type of learning rule is implemented in Adaline network?
15. Give some examples of neural networks employing supervised learning.
16. What type of activation function is used generally in the input layer?

UNIT-2

(Cluster – II): MULTILAYER PERCEPTRON


C1. Introduction, MLP (Multilayered Perceptron) network, Feed forward Neural Network
C2. Gradient Descent learning rule
C3. Back propagation
C4. Back propagation learning - input layer, output layer, hidden layer computations
C5. Error based Back Propagations, Limitations of Back-propagation algorithm
C6. Batch learning vs Online learning
C7. Hebbian Learning
C8. Competitive learning
8 Marks questions
CG-1:

Concept-1: Introduction, MLP network, Feed forward Neural Network

1. What is Multilayer perceptron? Explain its basic features and different phases of training.
2. Draw the architecture of Multilayer Perceptron network with 2 hidden layers and explain
the different phases involved in training the network.

Concept-2: Gradient Descent learning rule

3. Using the Gradient Descent learning rule, derive the expressions for the weight and bias updates across the various layers in a Back Propagation neural network.
4. Describe the structure of Back Propagation neural network and derive the learning rule
for Back propagation algorithm.
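
For reference, with squared error $E = \tfrac{1}{2}\sum_k (t_k - y_k)^2$, learning rate $\alpha$, hidden activations $z_j$, and activation function $f$, the gradient-descent updates that questions 3-4 ask you to derive are (in the usual notation, with $v_{ij}$ the input-to-hidden and $w_{jk}$ the hidden-to-output weights):

$\delta_k = (t_k - y_k)\, f'(y_{in,k}), \qquad \Delta w_{jk} = \alpha\, \delta_k\, z_j, \qquad \Delta w_{0k} = \alpha\, \delta_k$

$\delta_j = f'(z_{in,j}) \sum_k \delta_k\, w_{jk}, \qquad \Delta v_{ij} = \alpha\, \delta_j\, x_i, \qquad \Delta v_{0j} = \alpha\, \delta_j$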

Concept-3: Back propagation

5. Derive output equations and weight update equations for a multilayer feed forward neural
network using back propagation algorithm.
6. Find the new weights, using back-propagation, for the network shown in the Figure. The network is presented with the input pattern [-1, 1] and the target output is +1. Use a learning rate of α = 0.25 and the bipolar sigmoidal activation function.
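
Because the figure for question 6 is not reproduced here, the sketch below uses illustrative initial weights; only the procedure (one forward and one backward pass with the bipolar sigmoid and α = 0.25) mirrors the question.

```python
import numpy as np

def f(x):        # bipolar sigmoid
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def f_prime(fx): # derivative written in terms of the output f(x)
    return 0.5 * (1.0 + fx) * (1.0 - fx)

alpha = 0.25
x = np.array([-1.0, 1.0]); t = 1.0            # input pattern and target from the question

# Illustrative (assumed) initial weights: 2 inputs -> 2 hidden -> 1 output
V  = np.array([[0.6, -0.1], [-0.3, 0.4]])     # V[i, j]: input i to hidden unit j
bv = np.array([0.3, 0.5])                     # hidden biases
W  = np.array([0.4, 0.1]); bw = -0.2          # hidden-to-output weights and bias

# Forward pass
z = f(x @ V + bv)
y = f(z @ W + bw)

# Backward pass (error terms)
delta_k = (t - y) * f_prime(y)                # output-layer delta
delta_j = delta_k * W * f_prime(z)            # hidden-layer deltas

# Weight updates
W  = W + alpha * delta_k * z;          bw = bw + alpha * delta_k
V  = V + alpha * np.outer(x, delta_j); bv = bv + alpha * delta_j
print("new W:", W, "new bw:", bw)
print("new V:", V, "new bv:", bv)
```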

Concept-4: Back propagation learning - input layer, output layer, hidden layer computations

7. Explain Error back propagation training Algorithm with the help of flowchart.

8. Find the new weights when the net illustrated in the given figure is presented with the input pattern (0, 1) and the target output is 1. Use a learning rate of α = 0.25 and the binary sigmoid activation function.

CG-2:

Concept-5: Limitations of Back Propagation algorithm

9. Explain the concepts of local minima and global minima and give the limitations of the Back Propagation algorithm.
10. Discuss the remarks on the Back Propagation algorithm.
Concept-6: Batch learning and On-line learning

11. Give the advantages of On-line learning over Batch learning


12. What is stochastic learning? Explain the two methods of supervised learning in
Multilayer Perceptron.
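
A minimal sketch contrasting the two supervised training modes behind questions 11-12; the linear model and squared-error loss here are illustrative assumptions. Batch (epoch-by-epoch) learning accumulates the gradient over the whole training set before one update, while on-line (stochastic, example-by-example) learning updates after every pattern.

```python
import numpy as np

def batch_epoch(w, X, t, alpha):
    # accumulate the error gradient over ALL patterns, then apply a single update
    grad = sum((np.dot(w, x) - tk) * x for x, tk in zip(X, t)) / len(X)
    return w - alpha * grad

def online_epoch(w, X, t, alpha):
    # update immediately after each pattern (stochastic learning)
    for x, tk in zip(X, t):
        w = w - alpha * (np.dot(w, x) - tk) * x
    return w
```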

Concept-7: Hebbian Learning

13. Design Hebb Net to implement logical AND function. Use bipolar inputs and targets.
14. Using the Hebb rule, find the weights required to perform the following classification of the given input patterns shown in the Figure. The patterns are shown in 3 x 3 matrix form in the squares. The "+" symbol represents the value "1" and empty squares indicate "-1". Consider "I" belongs to the members of the class (so has target value 1) and "0" does not belong to the members of the class (so has target value -1).
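
A minimal sketch of question 13 using the Hebb rule with bipolar inputs and targets; each weight is strengthened by the product of its input and the target, and the bias by the target.

```python
import numpy as np

# Bipolar AND training pairs (x1, x2) -> t
X = np.array([[ 1,  1], [ 1, -1], [-1,  1], [-1, -1]])
t = np.array([1, -1, -1, -1])

w = np.zeros(2); b = 0.0
for x, target in zip(X, t):
    w = w + x * target      # Hebb rule: delta_w = x * t
    b = b + target          # bias update: delta_b = t

print("weights:", w, "bias:", b)   # gives w = [2, 2], b = -2
```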

Concept-8: Competitive learning

15. Explain the concept of competitive learning network.


16. Discuss where and how competitive learning is used with an example.
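
A minimal sketch of the winner-take-all update underlying questions 15-16; the Euclidean-distance match and the learning rate are the usual choices, shown here as assumptions.

```python
import numpy as np

def competitive_step(W, x, alpha=0.5):
    # W: one weight vector per output unit; only the closest (winning) unit learns
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] = W[winner] + alpha * (x - W[winner])   # move the winner toward the input
    return winner, W

# Example: three competing units clustering 2-dimensional inputs
W = np.array([[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]])
winner, W = competitive_step(W, np.array([1.0, 0.0]))
print("winning unit:", winner)
```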

4 Mark questions
CG1

1. Discuss the significance of hidden layers in neural networks.


2. What is the significance of the error signal in a perceptron network?
3. What type of activation function is used in a multilayer perceptron neural network?
4. Give the significance of non-linear activation functions in a multilayer perceptron.
5. What is the significance of the Gradient Descent rule?
6. Explain in detail why a single layer perceptron cannot be used for solving the XOR problem and how this is overcome by a multilayer perceptron.
7. Discuss the techniques for proper choice of learning rate.
8. For a derivative-based learning procedure, why is a sigmoidal function used instead of a step function?
CG2
1. What are the various factors that influence the convergence of learning in a Back Propagation neural network?
2. How can the learning rate be controlled in a multilayer network?
3. What is the use of momentum factor?
4. Briefly explain the terms local minima and global minima.
5. Give the limitations of Hebb rule.
6. How many hidden neurons do we have to choose in the network?
7. Give the constraints of batch learning?
8. Give the disadvantage of batch learning over stochastic learning.

2 Mark questions
1. Example by example learning is also called?
2. Epoch by epoch training is also termed as?
3. State the limitations of back propagation learning.
4. Give 2 applications of back propagation neural network.
5. What are the advantages of back propagation algorithm?
6. State the learning algorithm used in Multilayer perceptron
7. State the importance of back propagation algorithm.
8. List the stages involved in training of back propagation network.
9. State the significance of the error terms δk and δj in the BPN algorithm.
10. Why is gradient descent method adopted to minimize error?
11. Is competitive learning a supervised or unsupervised learning method?
12. What is the necessity of momentum factor in weight updation process?
13. Why is online learning called stochastic learning?
14. What is feature extraction?
15. List out some neural networks that employ supervised learning method.
16. Which net is related to “winner take all”?

UNIT-3

(Cluster – III): FUZZY LOGIC


C1. Fuzzy logic
C2. Fuzzy sets
C3. Fuzzy vs. Crisp
C4. Properties of crisp sets and fuzzy sets
C5. Operations on crisp sets and fuzzy sets
C6. Operations on crisp sets and fuzzy sets
C7. Examples on fuzzy sets
C8. Classical relations
8 Marks questions
CG-1
Concept-1: Fuzzy logic
1. What is fuzzy logic? Explain any two applications of fuzzy logic.
2. When are fuzzy systems used? Briefly explain the limitations of a fuzzy system.

Concept-2: Fuzzy sets


3. What are fuzzy sets? Explain the importance of fuzzy sets.
4. Explain the concept of crisp sets and fuzzy sets

Concept-3: Fuzzy vs. Crisp

5. Define crisp sets and fuzzy sets and state the importance of fuzzy sets.
6. Explain fuzzy logic in detail and mention the difference between crisp logic and fuzzy
logic with examples.

Concept-4: Properties of crisp sets and fuzzy sets

7. Enlist and explain any three classical set properties and any three fuzzy set properties.
8. i) What is the cardinality of a fuzzy set? Can a power set be formed for a fuzzy set?
ii) Find the power set and cardinality of the given set X= {2, 4, 6}. Also find cardinality
of power set.
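
Worked reference for question 8(ii): for $X = \{2, 4, 6\}$, $|X| = 3$; the power set is $P(X) = \{\varnothing, \{2\}, \{4\}, \{6\}, \{2,4\}, \{2,6\}, \{4,6\}, \{2,4,6\}\}$, and its cardinality is $|P(X)| = 2^{3} = 8$.
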
CG-2

Concept-5: Operations on crisp sets and fuzzy sets

9. Enlist and explain any three classical set operations and any three fuzzy set operations.
10. Consider two given fuzzy sets below:

Perform union, intersection, difference and complement over fuzzy sets A and B.
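
A minimal Python sketch of the standard max-min fuzzy operations asked for in question 10; since the two fuzzy sets are not reproduced above, the membership values used here are illustrative assumptions.

```python
# Assumed (illustrative) membership values for two fuzzy sets A and B
A = {'x1': 0.2, 'x2': 0.7, 'x3': 1.0}
B = {'x1': 0.5, 'x2': 0.3, 'x3': 0.8}

union        = {x: max(A[x], B[x]) for x in A}       # mu_{A u B} = max(mu_A, mu_B)
intersection = {x: min(A[x], B[x]) for x in A}       # mu_{A n B} = min(mu_A, mu_B)
complement_A = {x: 1 - A[x] for x in A}              # mu_{A'}   = 1 - mu_A
difference   = {x: min(A[x], 1 - B[x]) for x in A}   # A - B = A n B'

print(union, intersection, complement_A, difference)
```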

Concept-6: Operations on crisp sets and fuzzy sets

11. Write a short note on fuzzy sets and fuzzy operations.


12. With an example, prove De Morgan's laws for fuzzy sets.

Concept-7: Examples on fuzzy sets

13. Prove with an example why the law of excluded middle and the law of contradiction do not hold good for fuzzy sets.
14. The discretized membership functions for a transistor and resistor are given below:
Find a) Algebraic sum b) Algebraic product c) bounded sum d) bounded difference.
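
Since the discretized membership functions for the transistor and the resistor are not reproduced above, only the defining expressions needed for question 14 are listed: algebraic sum $\mu_{A+B}(x) = \mu_A(x) + \mu_B(x) - \mu_A(x)\mu_B(x)$; algebraic product $\mu_{A \cdot B}(x) = \mu_A(x)\mu_B(x)$; bounded sum $\mu_{A \oplus B}(x) = \min[1,\ \mu_A(x) + \mu_B(x)]$; bounded difference $\mu_{A \ominus B}(x) = \max[0,\ \mu_A(x) - \mu_B(x)]$.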

Concept-8: Classical relations

15. Explain the concept of classical relations.


16. Explain about Cartesian product of a crisp relation.

4 Mark questions
CG1

1. In what contexts are fuzzy systems used?


2. What is the difference between a crisp set and a fuzzy set?
3. State the significance of fuzzy sets
4. What are the limitations of fuzzy logic?
5. Why is the cardinality of a fuzzy power set infinity?
6. Why do the law of excluded middle and the law of contradiction not hold good for fuzzy sets?
7. Give examples of empty fuzzy set and universal fuzzy set.
8. Justify the following statement: “Partial membership is allowed in fuzzy sets”.

CG2
9. Consider two fuzzy sets A1=0.2/y1+0.9/y2 and A2=0.3/y1+0.5/y2+1/y3. Find union of
the fuzzy sets.
10. Perform intersection operation on two fuzzy sets A=0.3/y1+0.1/y2+0.6/y3 and
B=1/y1+0.3/y2+0.6/y3.
11. Consider some fuzzy sets and prove the distributive law and the associative law.
12. Consider some fuzzy sets and prove De Morgan's laws.
13. How are union and intersection operations defined in fuzzy logic?
14. How are the relations represented in various forms?
15. Given a fuzzy set B=1/x1+0.3/x2+0.6/x3. Find its complement.
16. Find difference of the two fuzzy sets A=0.3/y1+0.1/y2+0.6/y3 and
B=1/y1+0.3/y2+0.6/y3.
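
A minimal Python sketch working questions 9, 10, 15 and 16 above with the standard max-min operations; elements missing from a set are taken to have membership 0, and the set of question 15 is relabelled over y1-y3 for brevity.

```python
A1 = {'y1': 0.2, 'y2': 0.9, 'y3': 0.0}   # question 9 (y3 absent -> membership 0)
A2 = {'y1': 0.3, 'y2': 0.5, 'y3': 1.0}
A  = {'y1': 0.3, 'y2': 0.1, 'y3': 0.6}   # questions 10 and 16
B  = {'y1': 1.0, 'y2': 0.3, 'y3': 0.6}   # questions 10, 15 and 16

union_q9         = {y: max(A1[y], A2[y]) for y in A1}   # {y1: 0.3, y2: 0.9, y3: 1.0}
intersection_q10 = {y: min(A[y], B[y]) for y in A}      # {y1: 0.3, y2: 0.1, y3: 0.6}
complement_q15   = {y: round(1 - B[y], 2) for y in B}   # {y1: 0.0, y2: 0.7, y3: 0.4}
difference_q16   = {y: min(A[y], 1 - B[y]) for y in A}  # A - B: {y1: 0.0, y2: 0.1, y3: 0.4}
```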

2 Mark questions
1. What are the methods of representation of a classical set?
2. What are the methods of representation of a fuzzy set?
3. State whether a power set can be formed for a fuzzy set?
4. State the significance of a relation?
5. Give the expression for computing algebraic sum on fuzzy sets.
6. Give the expression for computing algebraic product on fuzzy sets.
7. Give the expression for computing bounded sum on fuzzy sets.
8. Give the expression for computing bounded difference on fuzzy sets.
9. What is law of excluded middle?
10. What is law of contradiction?
11. Mention few applications of fuzzy logic.
12. Find the cardinality of the classical set A= {3, 2, 5, 7, 8, 2}.
13. Give a classical set B= {2, 3, 5}. Determine the cardinality of the power set?
14. Find the cartesian product of sets X={3, 5, 7}, Y={a, b, c}.
15. What is the cardinality of a fuzzy power set?
16. Give the expression to perform complement of a fuzzy set.
