
KATHMANDU UNIVERSITY

Department of Electrical and Electronics Engineering


Internal Examination - 2021
Course: Neural Network and Fuzzy Logic Course Code: ETEG 425
Full Marks: 40 Time: 1.5 hr
Attempt all Questions:

1. a) Suggest and explain an activation model and a learning method suitable for solving
non-linear activation problems. [5]
b) What do you mean by supervised and unsupervised learning? Explain with suitable
examples. State and explain the XOR problem. How can it be overcome? [2+2+1]
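(The XOR problem is the classic example of a pattern a single-layer perceptron cannot represent, because the four XOR points are not linearly separable; adding a hidden layer overcomes this. The sketch below is a hedged illustration in Python with hand-picked threshold-unit weights, not part of the question itself.)

# Sketch: XOR with one hidden layer of threshold units (hand-picked weights).
# h1 acts as OR, h2 as AND; the output fires when OR is true but AND is not.
def step(v):
    return 1 if v >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # OR
    h2 = step(x1 + x2 - 1.5)      # AND
    return step(h1 - h2 - 0.5)    # OR AND NOT AND  ->  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))   # prints 0, 1, 1, 0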

2. a) Discuss the operation of a single-neuron system. A neuron j receives inputs from four
other neurons whose activity levels are 10, -20, 4, and -2. The respective synaptic weights
of neuron j are 0.8, 0.2, -1.0, and -0.9. Calculate the output of the neuron for the
following two situations:
(i) The neuron is linear.
(ii) The neuron is represented by a McCulloch-Pitts model.
Assume that the bias applied to the neuron is zero. [8]
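(As a minimal check of the arithmetic in this part: the induced local field is 0.8*10 + 0.2*(-20) + (-1.0)*4 + (-0.9)*(-2) = 1.8. The sketch below assumes the McCulloch-Pitts unit fires, i.e. outputs 1, when the local field is non-negative.)

# Sketch: output of a single neuron with zero bias.
inputs  = [10, -20, 4, -2]
weights = [0.8, 0.2, -1.0, -0.9]
bias = 0.0

v = sum(w * x for w, x in zip(weights, inputs)) + bias   # 8 - 4 - 4 + 1.8 = 1.8

linear_output = v                              # (i) linear neuron: y = v = 1.8
mcp_output = 1 if v >= 0 else 0                # (ii) hard-limit at 0: y = 1

print(v, linear_output, mcp_output)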
b) Using the Perceptron Learning Rule, determine the weights required to perform the
following pattern classification. Vectors (1 1 1 1) and (-1 1 -1 -1) are members of the
first class. Vectors (1 1 1 -1) and (1 -1 -1 1) are members of the second class. Use two
output neurons. Assume a learning rate of 0.9 and initial weights of 0.25. Using the
training vectors as input, test the response of the net. [7]
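(A hedged sketch of the perceptron learning rule for this part. The bipolar target coding below, class 1 as (1, -1) and class 2 as (-1, 1) on the two output neurons, a zero threshold, and biases also initialised to 0.25 are assumptions for illustration, not given in the question.)

# Perceptron learning rule sketch (assumed bipolar target coding, threshold = 0).
patterns = [([1, 1, 1, 1],    [1, -1]),    # class 1
            ([-1, 1, -1, -1], [1, -1]),    # class 1
            ([1, 1, 1, -1],   [-1, 1]),    # class 2
            ([1, -1, -1, 1],  [-1, 1])]    # class 2
alpha = 0.9
W = [[0.25] * 4 for _ in range(2)]          # 2 output neurons x 4 inputs
b = [0.25, 0.25]                            # biases (assumed to start at 0.25 as well)

def sign(v):
    return 1 if v >= 0 else -1

for epoch in range(10):                     # a few passes over the training set
    for x, t in patterns:
        for j in range(2):
            y = sign(sum(W[j][i] * x[i] for i in range(4)) + b[j])
            if y != t[j]:                   # update only on error
                for i in range(4):
                    W[j][i] += alpha * t[j] * x[i]
                b[j] += alpha * t[j]

# Test: response of the net to each training vector after learning.
for x, t in patterns:
    y = [sign(sum(W[j][i] * x[i] for i in range(4)) + b[j]) for j in range(2)]
    print(x, y, t)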

3. a) Consider the backpropagation neural network shown below. Assuming that the
neurons have a logistic sigmoid activation function, do the following:
i) Perform a forward pass on the network.
ii) Perform a reverse pass (training) once (target = 1, α = 1).
iii) Perform a further forward pass and comment on the result. [5]
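(The network diagram is not reproduced here. The sketch below uses a hypothetical one-input, one-hidden, one-output sigmoid chain with made-up weights purely to illustrate the three steps: a forward pass, one backpropagation update with target 1 and learning rate 1, and a second forward pass showing the output move toward the target.)

import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Hypothetical network; the actual weights come from the exam diagram.
x, w1, w2, target, lr = 1.0, 0.3, 0.8, 1.0, 1.0

def forward(w1, w2):
    h = sigmoid(w1 * x)          # hidden activation
    y = sigmoid(w2 * h)          # output activation
    return h, y

h, y = forward(w1, w2)           # (i) forward pass

# (ii) one reverse pass: delta rule for logistic units
delta_out = (target - y) * y * (1 - y)
delta_hid = delta_out * w2 * h * (1 - h)
w2 += lr * delta_out * h
w1 += lr * delta_hid * x

_, y_after = forward(w1, w2)     # (iii) forward pass again
print(y, y_after)                # the output has moved toward the target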

b) Calculate the neural network outputs and their errors for the feedforward neural
network with two inputs, two hidden neurons, and two output neurons, as shown in the
diagram.

• The initial weights, the biases, and the training inputs/outputs are given in the diagram.
• Single training set: for inputs 0.05 and 0.10, the target outputs are 0.01 and 0.99.
• The activation function for the hidden and output neurons is the logistic sigmoid function,
  σ(v) = 1 / (1 + e^(-v)). [5]
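(A hedged sketch of the forward pass and error calculation for this 2-2-2 network. The inputs and targets are taken from the question; the weights and biases below are placeholders, since the actual values are given only in the diagram, which is not reproduced here.)

import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

x = [0.05, 0.10]                  # inputs (given)
t = [0.01, 0.99]                  # targets (given)

# Placeholder weights and biases; replace with the values from the diagram.
w_hidden = [[0.1, 0.2], [0.3, 0.4]]   # w_hidden[j][i]: input i -> hidden j
b_hidden = 0.5
w_output = [[0.6, 0.7], [0.8, 0.9]]   # w_output[k][j]: hidden j -> output k
b_output = 0.5

h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2)) + b_hidden) for j in range(2)]
y = [sigmoid(sum(w_output[k][j] * h[j] for j in range(2)) + b_output) for k in range(2)]

errors = [0.5 * (t[k] - y[k]) ** 2 for k in range(2)]   # squared error per output
print(y, errors, sum(errors))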

4. A Kohonen self-organizing map is used to cluster four vectors. Let the vectors to be
clustered be (1, 1, 0, 0), (0, 0, 0, 1), (1, 0, 0, 0), and (0, 0, 1, 1). The maximum number
of clusters to be formed is m = 2. Suppose the learning rate is α = 0.6, and the
neighborhood of node J is set so that only one cluster updates its weights at each step
(R = 0). The initial weight matrix (each column holding the weights of one cluster unit) is

    [0.2  0.8]
    [0.6  0.4]
    [0.5  0.7]
    [0.9  0.3]

Calculate the updates in the weight matrix after training the network using the first
vector, (1, 1, 0, 0). [5]
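(A short sketch of this single update step, assuming the 4x2 reading of the weight matrix above with columns as cluster units: the winner is chosen by smallest squared Euclidean distance, and with R = 0 only the winner's column is moved toward the input.)

# Sketch: one Kohonen SOM update for the first training vector.
x = [1, 1, 0, 0]          # first training vector
alpha = 0.6               # learning rate
W = [[0.2, 0.8],
     [0.6, 0.4],
     [0.5, 0.7],
     [0.9, 0.3]]          # rows = input components, columns = cluster units

# Squared Euclidean distance from x to each cluster's weight vector.
d = [sum((W[i][j] - x[i]) ** 2 for i in range(4)) for j in range(2)]
J = d.index(min(d))       # here d = [1.86, 0.98], so the second unit wins

# Only the winner is updated: w_new = w_old + alpha * (x - w_old).
for i in range(4):
    W[i][J] += alpha * (x[i] - W[i][J])

print(d, [row[J] for row in W])   # updated column: approximately [0.92, 0.76, 0.28, 0.12]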
