
NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA

Department of Electrical Engineering


Soft Computing Techniques (EE6243)
Session: 2021 - 2022 (Autumn)
B.Tech., M.Tech. and Ph.D.

End Semester Examination

Full Marks: 50 Total Time: 2 hr


Instructions:

• Answer each question on a separate page. At the top of each answer script, write your name, roll number, and question number, then solve. Highlight your final answer.

• Use Adobe Scan / CamScanner only for taking photos of your answer script. Save it as RollNo_Name_QNo.pdf. Submissions in any other format will not be considered for evaluation.

• Attach the PDF files to the appropriate link and use Turn in / Hand in for submission.

1. Find (a) the L1 norm of the gradient of the following function at x̄ = [0.5 −0.5]^T and (b) the percentage error involved in the second-order Taylor series expansion for ∆x̄ = [0.1 −0.1]^T. [2+3]

f(x̄) = (x1 − 1)^2 e^(x2) + x1    (1)
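The two quantities asked for can be verified numerically. The sketch below (an illustrative check, not part of the question) evaluates the analytical gradient and Hessian of equation (1) at the given point:

```python
import numpy as np

def f(x):
    # f(x) = (x1 - 1)^2 * exp(x2) + x1
    return (x[0] - 1.0) ** 2 * np.exp(x[1]) + x[0]

def grad(x):
    # Analytical gradient of f
    return np.array([
        2.0 * (x[0] - 1.0) * np.exp(x[1]) + 1.0,
        (x[0] - 1.0) ** 2 * np.exp(x[1]),
    ])

def hess(x):
    # Analytical Hessian of f
    return np.array([
        [2.0 * np.exp(x[1]), 2.0 * (x[0] - 1.0) * np.exp(x[1])],
        [2.0 * (x[0] - 1.0) * np.exp(x[1]), (x[0] - 1.0) ** 2 * np.exp(x[1])],
    ])

x0 = np.array([0.5, -0.5])
dx = np.array([0.1, -0.1])

l1_norm = np.abs(grad(x0)).sum()                       # (a) L1 norm of gradient
taylor2 = f(x0) + grad(x0) @ dx + 0.5 * dx @ hess(x0) @ dx
exact = f(x0 + dx)
pct_error = 100.0 * abs(exact - taylor2) / abs(exact)  # (b) percentage error
```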
2. Consider the multi-layer perceptron shown below. Find (a) the norm of the error vector and (b) the updated weight w22 for the following conditions. [3+3]

• Input x̄ = [0.2 0.1]^T
• Desired output d̄ = [0.005 0.16]^T
• µ(1) = 10, µ(2) = 1
• f1: bipolar sigmoid activation function with α = 1
• f2: binary sigmoid activation function with α = 1

[Figure: two-layer MLP diagram. The connection weights visible in the figure include 0.5, 0.15, 0.02, 0.02, 0.04, 0.05, 0.5, 0.25, 0.01, 0.01, and neuron labels 3 and 4 appear; the exact weight-to-connection assignment is given in the original diagram.]
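The mechanics of the forward pass, error norm, and one delta-rule weight update can be sketched as below. The weight matrices `W1` and `W2` here are ASSUMED placeholder assignments (the real values and their placement come from the figure), so this illustrates the procedure, not the answer:

```python
import numpy as np

def bipolar_sigmoid(net, alpha=1.0):
    # f(net) = (1 - exp(-a*net)) / (1 + exp(-a*net))
    return (1.0 - np.exp(-alpha * net)) / (1.0 + np.exp(-alpha * net))

def binary_sigmoid(net, alpha=1.0):
    # f(net) = 1 / (1 + exp(-a*net))
    return 1.0 / (1.0 + np.exp(-alpha * net))

x = np.array([0.2, 0.1])            # input from the question
d = np.array([0.005, 0.16])         # desired output from the question

# Placeholder weight matrices -- ASSUMED; use the figure's actual assignment.
W1 = np.array([[0.5, 0.15], [0.02, 0.02]])   # hidden layer
W2 = np.array([[0.5, 0.25], [0.01, 0.01]])   # output layer

h = bipolar_sigmoid(W1 @ x)         # hidden activations (f1, alpha = 1)
y = binary_sigmoid(W2 @ h)          # network output (f2, alpha = 1)
e = d - y                           # error vector
err_norm = np.linalg.norm(e)        # (a) norm of the error vector

# Output-layer delta and one gradient-descent step on W2 (mu(2) = 1)
delta2 = e * y * (1.0 - y)          # binary sigmoid derivative, alpha = 1
W2_new = W2 + 1.0 * np.outer(delta2, h)
```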

3. An RBF neural network contains two neurons in the hidden layer. Consider the following input–output patterns. The coordinates of the centers, the distances between centers and input patterns, and the initial values of the spreads and weight vector are given next. Update (a) the spread and (b) the center of the 2nd neuron. Take µσ = 0.5, µc = 0.5. [3+3]
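The gradient-descent update rules for the spreads and centers of a Gaussian RBF network can be sketched as follows; the numbers at the bottom are HYPOTHETICAL stand-ins just to exercise the rule, since the question supplies its own table:

```python
import numpy as np

def rbf_update(x, d, centers, spreads, w, mu_sigma=0.5, mu_c=0.5):
    # One gradient-descent step on the spreads and centers of a Gaussian
    # RBF network minimizing the instantaneous error E = 0.5 * e^2.
    dists = np.array([np.linalg.norm(x - c) for c in centers])
    phi = np.exp(-dists ** 2 / (2.0 * spreads ** 2))   # Gaussian basis outputs
    e = d - w @ phi                                    # instantaneous error
    new_spreads = spreads + mu_sigma * e * w * phi * dists ** 2 / spreads ** 3
    new_centers = [c + mu_c * e * wj * pj * (x - c) / s ** 2
                   for c, wj, pj, s in zip(centers, w, phi, spreads)]
    return new_centers, new_spreads

# Hypothetical centers, spreads, and weights (NOT the question's data)
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
spreads = np.array([1.0, 1.0])
w = np.array([0.3, 0.7])
centers, spreads = rbf_update(np.array([0.5, 0.2]), 0.4, centers, spreads, w)
```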
EE6201 End Semester Examination Page 2 of 4

4. Consider the following SOM lattice. For an input x̄ (given next), suppose neuron-6 is the winning neuron.
Weight vectors for neuron 6 and neuron 7 are also provided below. [3+2]

Iteration = 10

7 8 9
4 5 6   (neuron-6: winning neuron)
1 2 3

x̄ = [2 4 −1]^T,  w̄6 = [1.5 3.8 −1.2]^T,  w̄7 = [4 −2 3]^T

Find
(a) the topological neighbourhood function value for neuron-7, and
(b) the distance between the input x̄ and the updated weight of neuron-7.
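The two steps can be sketched with a Gaussian neighbourhood and the standard SOM weight update. The neighbourhood width `sigma` and learning rate `mu` below are ASSUMED placeholders; the question's own values should be substituted:

```python
import numpy as np

# Lattice coordinates (col, row) for the 3x3 SOM grid
#   7 8 9
#   4 5 6
#   1 2 3
pos = {1: (0, 0), 2: (1, 0), 3: (2, 0),
       4: (0, 1), 5: (1, 1), 6: (2, 1),
       7: (0, 2), 8: (1, 2), 9: (2, 2)}

x = np.array([2.0, 4.0, -1.0])
w7 = np.array([4.0, -2.0, 3.0])

sigma = 1.0   # ASSUMED neighbourhood width
mu = 0.5      # ASSUMED learning rate

# (a) Gaussian topological neighbourhood of neuron-7 around winner neuron-6
d2 = sum((a - b) ** 2 for a, b in zip(pos[6], pos[7]))  # squared lattice distance
h67 = np.exp(-d2 / (2.0 * sigma ** 2))

# (b) SOM weight update for neuron-7, then its distance to the input
w7_new = w7 + mu * h67 * (x - w7)
dist = np.linalg.norm(x - w7_new)
```

Note that the update pulls w̄7 toward x̄, so the distance in (b) is always smaller than the pre-update distance.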

5. Prove that the principal directions of orientation of a set of n-dimensional samples can be determined by eigenvalue analysis, where the eigenvectors of the covariance matrix represent the principal components and the eigenvalues indicate the variance along the respective directions. [8]
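As a sketch of the argument: for zero-mean samples with covariance matrix C, the variance of the projection onto a unit vector w̄ is w̄^T C w̄, and maximizing it under the unit-norm constraint leads directly to the eigenvalue problem:

```latex
\max_{\bar{w}} \; \bar{w}^{T} C \bar{w}
\quad \text{s.t.} \quad \bar{w}^{T}\bar{w} = 1
\;\Rightarrow\;
\mathcal{L} = \bar{w}^{T} C \bar{w} - \lambda\left(\bar{w}^{T}\bar{w} - 1\right),
\qquad
\frac{\partial \mathcal{L}}{\partial \bar{w}} = 2C\bar{w} - 2\lambda\bar{w} = 0
\;\Rightarrow\;
C\bar{w} = \lambda\bar{w},
\qquad
\bar{w}^{T} C \bar{w} = \lambda .
```

The last equality shows that the variance along an eigenvector equals its eigenvalue.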

6. Derive the expression of the normal equation for polynomial regression with regularization. [5]
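For reference, with design matrix Φ (rows φ(x_i)^T of polynomial basis terms), target vector ȳ, and regularization coefficient λ, minimizing the regularized squared error gives:

```latex
E(\bar{w}) = \tfrac{1}{2}\,\lVert \Phi\bar{w} - \bar{y} \rVert^{2}
           + \tfrac{\lambda}{2}\,\lVert \bar{w} \rVert^{2}
\;\Rightarrow\;
\nabla_{\bar{w}} E = \Phi^{T}\!\left(\Phi\bar{w} - \bar{y}\right) + \lambda\bar{w} = 0
\;\Rightarrow\;
\left(\Phi^{T}\Phi + \lambda I\right)\bar{w} = \Phi^{T}\bar{y} .
```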

7. Consider linear input–output mapping using an Adaline with a bipolar sigmoid activation function. Suppose the input pattern x̄ = [−1.0 1.0]^T, desired output d = 0.6, initial weight vector w̄ = [0.2 0.5]^T, bias = 0.4, and α for the bipolar sigmoid activation function = 0.1. Update the bias by minimizing the instantaneous error. Take µ = 1. [3]
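The bias update follows one gradient step on the instantaneous error E = 0.5·e². The sketch below assumes the common bipolar-sigmoid convention f(net) = tanh(α·net/2), with derivative (α/2)(1 − f²):

```python
import numpy as np

# Data from the question
x = np.array([-1.0, 1.0])
w = np.array([0.2, 0.5])
b, d, alpha, mu = 0.4, 0.6, 0.1, 1.0

net = w @ x + b                            # net input = -0.2 + 0.5 + 0.4
# Bipolar sigmoid: f(net) = (1 - e^{-a net}) / (1 + e^{-a net}) = tanh(a*net/2)
y = np.tanh(alpha * net / 2.0)
f_prime = (alpha / 2.0) * (1.0 - y ** 2)   # derivative of bipolar sigmoid

e = d - y                                  # instantaneous error
b_new = b + mu * e * f_prime               # gradient step on E = 0.5 * e^2
```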

8. The following optimization problem is solved using PSO. The coordinates of the i-th particle, its local best, and the coordinates of the group best are given below. [2+3]

Minimize

Iteration k = 10
Maximum iteration kmax = 50

Find (a) cognitive component of the particle-i, and (b) function value after updating its position.
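The structure of one PSO update is sketched below. The acceleration coefficients `c1`, `c2`, the linearly decaying inertia schedule, and the particle coordinates are ASSUMED placeholders, since the question supplies its own data:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(xi, vi, pbest, gbest, k, kmax, c1=2.0, c2=2.0):
    # One PSO velocity/position update with linearly decaying inertia
    # (c1, c2 and the inertia limits 0.9 -> 0.4 are ASSUMED).
    w = 0.9 - (0.9 - 0.4) * k / kmax
    r1, r2 = rng.random(xi.shape), rng.random(xi.shape)
    cognitive = c1 * r1 * (pbest - xi)    # (a) cognitive component
    social = c2 * r2 * (gbest - xi)       # social component
    vi_new = w * vi + cognitive + social
    return xi + vi_new, vi_new, cognitive

# Hypothetical coordinates just to exercise the update (NOT the question's data)
xi, vi = np.array([1.0, 2.0]), np.array([0.1, -0.2])
pbest, gbest = np.array([0.5, 1.5]), np.array([0.0, 1.0])
xi_new, vi_new, cog = pso_step(xi, vi, pbest, gbest, k=10, kmax=50)
```

For (b), the objective function from the question is then evaluated at `xi_new`.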

9. Find (a) the optimal x̄ and (b) λ for the following problem: [2+2]

Minimize

Subject to:
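For an equality-constrained problem of this form (minimize f(x̄) subject to g(x̄) = 0), the standard Lagrange-multiplier conditions determining the optimal x̄ and λ are:

```latex
\mathcal{L}(\bar{x}, \lambda) = f(\bar{x}) + \lambda\, g(\bar{x}),
\qquad
\nabla_{\bar{x}} \mathcal{L} = \nabla f(\bar{x}) + \lambda\, \nabla g(\bar{x}) = 0,
\qquad
g(\bar{x}) = 0 .
```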

10. Consider the data below for PCA. The covariance matrix and the principal directions are given next. [3]

Find variance along each direction.
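The variance along a unit direction v̄ is v̄^T C v̄, which for an eigenvector of C equals the corresponding eigenvalue. A sketch with a HYPOTHETICAL covariance matrix (substitute the question's values):

```python
import numpy as np

# Hypothetical covariance matrix -- replace with the one given in the question.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

eigvals, eigvecs = np.linalg.eigh(C)   # columns of eigvecs are the directions

# Variance along each principal direction v is v^T C v, i.e. the eigenvalue.
for lam, v in zip(eigvals, eigvecs.T):
    var_along = v @ C @ v
    assert np.isclose(var_along, lam)
```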

End of question paper. ALL THE BEST.
