
Department of Electronics & Communication Engg.

National Institute of Technology, Rourkela


MID-SEMESTER EXAMINATION, Spring 2018-19
CLASS: B.Tech & M.Tech Dual Degree    Full Marks: 30
SUBJECT: Soft Computing               Time: 2 hours
SUBJECT CODE: EC 444

Answer all questions.


Figures inside [ ] in the right-hand margin indicate marks.
All parts of a question must be answered in one place.

The question paper contains two pages.

1. Briefly explain the following:


(a) The Wiener-Hopf solution. [1]
(b) Gradient descent algorithm. [1]
(c) Hessian matrix and its use. [1]
(d) Convex set and convex function. [1]
(e) Momentum in the gradient descent method. [1]
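For parts (b) through (e), the following minimal sketch may help fix ideas: gradient descent with a momentum term on a quadratic whose constant Hessian is the positive definite matrix A (so the objective is convex). The objective, step size, and momentum factor are illustrative choices, not part of the question.

```python
import numpy as np

# Gradient descent with momentum on an illustrative quadratic
# f(w) = 0.5 w^T A w - b^T w, whose gradient is A w - b and whose
# (constant) Hessian is A; A positive definite makes f convex.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(w):
    return A @ w - b

eta, beta = 0.1, 0.9               # step size and momentum factor
w = np.zeros(2)
v = np.zeros(2)                    # velocity: exponentially weighted past steps
for _ in range(300):
    v = beta * v - eta * grad(w)   # momentum smooths successive updates
    w = w + v

print(w, np.linalg.solve(A, b))    # both approach the minimizer A^{-1} b
```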

2. (a) Explain briefly the application of adaptive filter theory in the differential adaptive predictive quantizer. [2.5]
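As a rough illustration of the idea in 2(a), the sketch below simulates a differential quantizer whose linear predictor is adapted by LMS, so only the quantized prediction error need be transmitted; the test signal, predictor order, and quantizer step are hypothetical choices.

```python
import numpy as np

# Differential quantizer with an LMS-adapted linear predictor: transmit
# the quantized prediction error eq rather than the raw sample x.
rng = np.random.default_rng(0)
s = np.sin(0.05 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)

p, eta, delta = 4, 0.01, 0.05     # predictor order, LMS step, quantizer step
w = np.zeros(p)                   # adaptive predictor coefficients
past = np.zeros(p)                # previously reconstructed samples
rec = []
for x in s:
    pred = w @ past                             # adaptive linear prediction
    eq = delta * np.round((x - pred) / delta)   # uniform quantizer on the error
    xh = pred + eq                # reconstruction available at both ends
    w = w + eta * past * eq       # adapt on eq so the decoder can track
    past = np.roll(past, 1); past[0] = xh
    rec.append(xh)

print(np.mean((s - np.array(rec)) ** 2))   # reconstruction error stays small
```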
(b) Explain briefly the steepest descent method and find the condition on the step size for which the
algorithm converges to the optimal value. [2.5]
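A sketch of the convergence argument for 2(b), in standard Wiener-filter notation (R: input correlation matrix, p̄: cross-correlation vector, w̄ₒ: optimal weight vector, λ_max: largest eigenvalue of R):

```latex
% Steepest descent on the mean-square error J(w) = sigma_d^2 - 2 p^T w + w^T R w:
\bar{w}(n+1) = \bar{w}(n) + \eta\left[\bar{p} - R\,\bar{w}(n)\right]
% With the weight-error vector c(n) = w(n) - w_o and p = R w_o:
\bar{c}(n+1) = \left(I - \eta R\right)\bar{c}(n)
% Projecting c(n) onto the eigenvectors of R decouples the modes:
c_k(n+1) = (1 - \eta\lambda_k)\,c_k(n)
% Every mode decays iff |1 - \eta\lambda_k| < 1 for all k, i.e.
0 < \eta < \frac{2}{\lambda_{\max}}
```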

3. (a) The perceptron may be used to perform numerous logic functions. Demonstrate the implementation
of the binary logic functions AND, OR, and COMPLEMENT. [2]
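One illustrative set of weights for 3(a) is sketched below; any weights realizing the correct separating hyperplanes serve equally well.

```python
import numpy as np

# A single perceptron y = step(w^T x + bias) realizes any linearly
# separable logic function; the weights below are one illustrative choice.
def perceptron(x, w, bias):
    return 1 if np.dot(w, x) + bias >= 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    AND = perceptron(x, [1, 1], -1.5)   # fires only when both inputs are 1
    OR  = perceptron(x, [1, 1], -0.5)   # fires when at least one input is 1
    print(x, AND, OR)

# Complement (NOT) of a single input: negative weight, positive bias.
for x in (0, 1):
    print(x, perceptron([x], [-1], 0.5))
```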
(b) In a variant of the LMS algorithm called the leaky LMS algorithm, the cost function to be minimized is
defined by ε(n) = ½|e(n)|² + ½λ∥w̄(n)∥², where w̄(n) is the parameter vector, e(n) is the estimation
error, and λ is a constant. As in the ordinary LMS algorithm, we have e(n) = d(n) − w̄ᵀ(n)x̄(n),
where d(n) is the desired response corresponding to the input vector x̄(n).
(i) Show that the time update of the parameter vector in the leaky LMS algorithm is given by

ŵ(n) = (1 − ηλ) ŵ(n − 1) + η x̄(n) e(n)

(ii) Find the condition on the step size η for which the algorithm converges. [3]
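A minimal simulation of the update in (i) may be useful for checking a derivation; the data model, step size η, and leak factor λ below are illustrative.

```python
import numpy as np

# Leaky LMS time update: w(n) = (1 - eta*lam) w(n-1) + eta * x(n) * e(n).
# Illustrative system-identification setup with a fixed target vector.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])
eta, lam = 0.05, 0.01          # step size and leak factor (illustrative)

w = np.zeros(3)
for n in range(5000):
    x = rng.standard_normal(3)                        # input vector x(n)
    d = w_true @ x + 0.01 * rng.standard_normal()     # desired response d(n)
    e = d - w @ x                                     # estimation error e(n)
    w = (1 - eta * lam) * w + eta * x * e             # leaky LMS update

print(w)   # near w_true, pulled slightly toward zero by the leak
```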
4. (a) Explain the design of the radial basis function network using K-means clustering and the RLS
algorithm. [3.5]
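A compact sketch of the two-stage design asked for in 4(a): K-means positions the Gaussian centres, then RLS estimates the linear output weights. The target function, number of centres, and width σ are hypothetical choices.

```python
import numpy as np

# Stage 1: K-means clusters the inputs to place Gaussian centres.
# Stage 2: RLS estimates the linear output weights of the RBF network.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
d = np.sin(np.pi * X[:, 0])              # illustrative target function

K, sigma = 8, 0.3                        # number of centres and width
centres = X[rng.choice(len(X), K, replace=False), 0]
for _ in range(20):                      # standard Lloyd iterations
    labels = np.argmin((X - centres) ** 2, axis=1)
    centres = np.array([X[labels == k, 0].mean() if np.any(labels == k)
                        else centres[k] for k in range(K)])

phi = np.exp(-(X - centres) ** 2 / (2 * sigma ** 2))   # hidden-layer outputs

w, P = np.zeros(K), 1e3 * np.eye(K)      # RLS state (unit forgetting factor)
for n in range(len(X)):
    u = phi[n]
    g = P @ u / (1.0 + u @ P @ u)        # gain vector
    e = d[n] - w @ u                     # a priori error
    w = w + g * e
    P = P - np.outer(g, u @ P)           # inverse-correlation update

print(np.abs(phi @ w - d).max())         # training error is small
```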
(b) Using the linear regression model dᵢ = w̄ᵀφ̄ᵢ + εᵢ, i = 1, ..., N:
i. Show that the least-squares estimate of w̄ is expressed as ŵ = w̄ + (ϕ̄ᵀϕ̄)⁻¹ ϕ̄ᵀ ε̄,
where ϕ̄ = [φ̄1 φ̄2 ··· φ̄N]ᵀ and ε̄ = [ε1 ε2 ··· εN]ᵀ. Assume that the error εᵢ
is a sample of a white noise process of variance σ².
ii. Hence show that the auto-covariance matrix E[(w̄ − ŵ)(w̄ − ŵ)ᵀ] = σ²R⁻¹, where R = ϕ̄ᵀϕ̄.
[1.5]
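A sketch of the derivation for 4(b), stacking the model as d̄ = ϕ̄w̄ + ε̄:

```latex
% Stacked model: d = Phi w + eps, with E[eps eps^T] = sigma^2 I.
\hat{\bar{w}} = (\bar{\phi}^{T}\bar{\phi})^{-1}\bar{\phi}^{T}\bar{d}
             = (\bar{\phi}^{T}\bar{\phi})^{-1}\bar{\phi}^{T}(\bar{\phi}\bar{w} + \bar{\varepsilon})
             = \bar{w} + (\bar{\phi}^{T}\bar{\phi})^{-1}\bar{\phi}^{T}\bar{\varepsilon}
% Hence w - w_hat = -R^{-1} Phi^T eps with R = Phi^T Phi, and
E\!\left[(\bar{w}-\hat{\bar{w}})(\bar{w}-\hat{\bar{w}})^{T}\right]
  = R^{-1}\bar{\phi}^{T}\,E[\bar{\varepsilon}\bar{\varepsilon}^{T}]\,\bar{\phi}\,R^{-1}
  = \sigma^{2} R^{-1}
```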

5. (a) Derive the back propagation algorithm for the multilayer perceptron (MLP). [4]
(b) Explain the use of the momentum factor in the back propagation algorithm. [1]
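A self-contained sketch touching both parts of question 5: a two-layer MLP trained by back propagation with a momentum term, on the XOR problem. The architecture, learning rate, and momentum factor are illustrative.

```python
import numpy as np

# Two-layer MLP trained by back propagation with momentum on XOR.
rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
eta, alpha = 0.3, 0.9                    # learning rate and momentum factor
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for _ in range(10000):
    H = sig(X @ W1 + b1)                 # forward pass, hidden layer
    Y = sig(H @ W2 + b2)                 # forward pass, output layer
    dY = (Y - T) * Y * (1 - Y)           # output-layer local gradient
    dH = (dY @ W2.T) * H * (1 - H)       # back-propagated hidden gradient
    # Momentum: new step = alpha * previous step - eta * current gradient
    vW2 = alpha * vW2 - eta * H.T @ dY;  W2 += vW2
    vb2 = alpha * vb2 - eta * dY.sum(0); b2 += vb2
    vW1 = alpha * vW1 - eta * X.T @ dH;  W1 += vW1
    vb1 = alpha * vb1 - eta * dH.sum(0); b1 += vb1

print(np.round(Y.ravel(), 2))   # approaches the XOR targets 0 1 1 0
```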

6. (a) Explain the convergence of the single-layer perceptron. [2]
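For 6(a), the standard result is the perceptron convergence theorem; a sketch, assuming the data are linearly separable with margin γ and bounded norm ∥x̄ₙ∥ ≤ R:

```latex
% Assume a unit vector w* with y_n (w*^T x_n) >= gamma and ||x_n|| <= R.
% Each mistake updates w_{k+1} = w_k + y_n x_n, so after k mistakes:
\langle \bar{w}_k, \bar{w}^{*} \rangle \ge k\gamma,
\qquad \|\bar{w}_k\|^{2} \le k R^{2}
% Cauchy-Schwarz combines the two bounds: k gamma <= ||w_k|| <= sqrt(k) R,
k \le \left(\frac{R}{\gamma}\right)^{2}
% so only finitely many mistakes occur and the weights converge.
```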


(b) Show that a ball is a convex set. [1]
(c) Show that the function f(x̄) = x̄ᵀAx̄ + b̄ᵀx̄ + c is convex if the matrix A is positive definite. [2]
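A sketch of the argument for 6(c); for a non-symmetric A one may replace A by its symmetric part ½(A + Aᵀ), which leaves f unchanged.

```latex
% Gradient and Hessian of f(x) = x^T A x + b^T x + c:
\nabla f(\bar{x}) = (A + A^{T})\,\bar{x} + \bar{b},
\qquad \nabla^{2} f(\bar{x}) = A + A^{T}
% f is quadratic, so its second-order Taylor expansion is exact:
f(\bar{y}) = f(\bar{x}) + \nabla f(\bar{x})^{T}(\bar{y}-\bar{x})
  + \tfrac{1}{2}(\bar{y}-\bar{x})^{T}(A + A^{T})(\bar{y}-\bar{x})
% A positive definite makes the last term nonnegative, giving
f(\bar{y}) \ge f(\bar{x}) + \nabla f(\bar{x})^{T}(\bar{y}-\bar{x})
% which is the first-order characterization of convexity.
```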

End
