
(Pages: 2) Reg. No: . . . . . . . . . . . . . . . . . . . . . . . .

Name : . . . . . . . . . . . . . . . . . . . . . . . .
M.TECH DEGREE EXAMINATION
Second Semester
Branch : Applied Electronics and Instrumentation; Specialization : Signal Processing
MAESP 201 ADVANCED DIGITAL SIGNAL PROCESSING
(2011 admission onwards)
[Regular]
MODEL QUESTION PAPER
Time: Three Hours                                  Maximum Marks: 100
1. a) Derive the Yule-Walker equations for an AR process of order M. Show that, given the
autocorrelation coefficients r(0), r(1), . . . , r(M), the AR parameters can be determined.
Find this relationship. (15 marks)
b) Find the relationship between the variance of the zero-mean white-noise input to the LTI
(linear time-invariant) system and the AR parameters and autocorrelation coefficients
of the AR process. (10 marks)
Or
2. a) Derive the Wiener-Hopf equations for the Linear Transversal filter. Express the matrix
formulation of the Wiener-Hopf equations. (20 marks)
b) Draw the Linear Transversal filter for the Wiener filter. (5 marks)
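For illustration, the matrix form of the Wiener-Hopf equations, R w_o = p, can be solved numerically. The sketch below assumes autocorrelation and cross-correlation values for a 3-tap transversal filter; all numbers are made up for the example.

```python
import numpy as np

# Assumed example: 3-tap transversal Wiener filter.
# R is the Toeplitz autocorrelation matrix of the tap inputs,
# p is the cross-correlation vector with the desired response.
r = np.array([1.0, 0.5, 0.25])                      # r(0), r(1), r(2) (assumed)
R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
p = np.array([0.5, 0.3, 0.1])                       # p(0), p(1), p(2) (assumed)

# Wiener-Hopf equations in matrix form: R w_o = p
w_o = np.linalg.solve(R, p)
```

Solving the linear system directly stands in for inverting R; for the Toeplitz structure above, the Levinson-Durbin recursion of question 5 is the efficient alternative.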
3. a) Explain the steepest descent algorithm applied to the Wiener filter. Derive the update
equation. (10 marks)
b) Draw the structure of the adaptive transversal filter for the steepest descent algorithm.
Also draw the bank of cross correlators for computing the corrections to the elements
of the tap-weight vector at time n. (5 marks)
c) State the condition on the step-size parameter for stability or convergence of the steepest
descent algorithm. (5 marks)
d) Explain Newton's method for optimization. (5 marks)
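As an illustrative numerical sketch relating to parts (a) and (c) (the matrix R, vector p, and step size below are all assumed values), the steepest descent update w(n+1) = w(n) + mu*(p - R w(n)) converges to the Wiener solution when 0 < mu < 2/lambda_max:

```python
import numpy as np

# Steepest descent on the Wiener filter (R and p assumed known a priori).
R = np.array([[1.0, 0.5], [0.5, 1.0]])   # input autocorrelation matrix (assumed)
p = np.array([0.5, 0.2])                 # cross-correlation vector (assumed)

lam_max = np.linalg.eigvalsh(R).max()
mu = 1.0 / lam_max                       # satisfies 0 < mu < 2 / lambda_max

w = np.zeros(2)
for _ in range(500):
    w = w + mu * (p - R @ w)             # steepest-descent update equation

w_opt = np.linalg.solve(R, p)            # exact Wiener solution for comparison
```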
Or
4. a) Explain the LMS (least mean square) algorithm. (10 marks)
b) For the LMS filter draw the
(i) block diagram of the adaptive transversal filter.
(ii) detailed structure of the transversal filter component.
(iii) detailed structure of the adaptive weight control mechanism.
(6 marks)
c) Define misadjustment in the context of the LMS algorithm. (4 marks)
d) What are the restrictions on the step-size parameter for convergence of the LMS
algorithm? (5 marks)
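A minimal LMS sketch on a toy system-identification problem (the plant, signal lengths, and step size are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy identification problem (assumed): estimate the plant h from input/output data.
h = np.array([0.8, -0.4, 0.2])           # unknown plant (assumed)
N, M = 5000, 3
u = rng.standard_normal(N)               # white input
d = np.convolve(u, h)[:N]                # desired response (noise-free here)

mu = 0.01                                # step size, chosen small for stability
w = np.zeros(M)
for n in range(M, N):
    u_vec = u[n:n - M:-1]                # tap-input vector [u(n), u(n-1), u(n-2)]
    e = d[n] - w @ u_vec                 # error signal
    w = w + mu * u_vec * e               # LMS tap-weight update
```

In this noise-free setting the weights approach the true plant; with noisy data the residual excess error is what misadjustment (part c) quantifies.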
5. a) Write the augmented Wiener-Hopf equations for forward linear prediction. (5 marks)
b) Write the Levinson-Durbin recursion. (10 marks)
c) What are some of the advantages of the Levinson-Durbin recursion? (4 marks)
d) Consider a real-valued wide-sense stationary signal with autocorrelation values
r(0) = 1, r(1) = 0.5, r(2) = 0.5, r(3) = 0.25
Use the Levinson-Durbin recursion to solve the augmented Wiener-Hopf equations (equiva-
lently, the autocorrelation normal equations) and find a third-order forward prediction-
error filter. (6 marks)
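A minimal Python sketch of the Levinson-Durbin recursion, applied to the autocorrelation values given in part (d); the function name and structure are illustrative:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the autocorrelation normal equations for the forward
    prediction-error filter a = [1, a(1), ..., a(order)] (sketch)."""
    a = np.array([1.0])
    E = r[0]                                     # prediction-error power
    for m in range(1, order + 1):
        k = (r[m] + a[1:] @ r[m - 1:0:-1]) / E   # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a - k * a[::-1]                      # order-update of the filter
        E = E * (1.0 - k * k)
    return a, E

r = np.array([1.0, 0.5, 0.5, 0.25])              # r(0)..r(3) from part (d)
a, E = levinson_durbin(r, 3)
# a -> [1, -0.375, -0.375, 0.125], E -> 0.65625
```

Each order-update costs O(m) operations, giving O(M^2) overall instead of the O(M^3) of a general linear solve — one of the advantages asked about in part (c).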
Or
6. a) Explain the RLS (recursive least squares) algorithm with the exponential weighting
factor (or forgetting factor). Write down the update equations for the tap-weight
vector. (20 marks)
b) Write the matrix inversion lemma (Woodbury's identity). (5 marks)
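An illustrative RLS sketch with exponential forgetting factor lam (the toy data and initialization constants are assumed); the P-update is where the matrix inversion lemma of part (b) enters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy identification problem (assumed), as a vehicle for the RLS recursions.
h = np.array([0.8, -0.4, 0.2])                  # unknown plant (assumed)
N, M = 500, 3
u = rng.standard_normal(N)
d = np.convolve(u, h)[:N]

lam = 0.99                                      # forgetting factor (assumed)
delta = 1e-3                                    # regularization (assumed)
P = np.eye(M) / delta                           # initial inverse correlation matrix
w = np.zeros(M)
for n in range(M, N):
    u_vec = u[n:n - M:-1]                       # tap-input vector
    k = P @ u_vec / (lam + u_vec @ P @ u_vec)   # gain vector
    e = d[n] - w @ u_vec                        # a priori estimation error
    w = w + k * e                               # tap-weight update
    P = (P - np.outer(k, u_vec @ P)) / lam      # inverse-correlation update
                                                # (matrix inversion lemma)
```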
7. Write notes on the following topics in non-linear signal processing:
a) Non-Gaussian models. (5 marks)
b) Generalized Gaussian distributions. (10 marks)
c) Stable distributions. (10 marks)
Or
8. Write notes on the following topics in non-linear signal processing:
a) Median smoothers. (7 marks)
b) Rank/Order filters. (9 marks)
c) Weighted Median smoothers. (9 marks)
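A minimal running-median smoother sketch (edge handling by end-sample replication is an assumed choice), showing the impulse-rejection property that motivates median smoothing over linear filtering:

```python
import numpy as np

def median_smooth(x, window=3):
    """Running-median smoother (sketch): window assumed odd,
    edges handled by replicating the end samples."""
    half = window // 2
    xp = np.concatenate([x[:1].repeat(half), x, x[-1:].repeat(half)])
    return np.array([np.median(xp[i:i + window]) for i in range(len(x))])

# A median smoother removes an isolated impulse outright,
# where a linear smoother would only spread it out.
x = np.array([1.0, 1.0, 9.0, 1.0, 1.0])   # impulse at the middle sample
y = median_smooth(x, 3)                   # -> [1, 1, 1, 1, 1]
```

Replacing the median with a general order statistic of the window gives the rank/order filters of part (b); weighting samples by repetition before taking the median gives the weighted median smoothers of part (c).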
[4 × 25 = 100 marks]