
LUND INSTITUTE OF TECHNOLOGY

Dept. of Electroscience

Exam in
ADAPTIVE SIGNAL PROCESSING (ETT042)
2005-12-16
Time: 08.00–13.00
Room: Kårhuset:Gasque

Allowed tools: Transform tables and collection of mathematical formulas, calculator, and the course literature in ASB and OSB (Haykin and Hayes, respectively).
No lecture notes distributed in .pdf format.
No handwritten solutions to problems.
Note: Solve only a single problem per sheet of paper, and write your
name on each sheet!
Statements must be supported by theoretical discussions,
calculations or by specific references to the literature.
Grading 3 (2.0p), 4 (3.0p), 5 (4.0p).

1. Answer the following questions briefly:

a) Mark Jmin , J(n), Jex (∞) and wo for the LMS algorithm in an error surface.
(0.1)
b) Convergence and tracking are two aspects of the performance of an adaptive
filter. Explain which of standard LMS and standard RLS is better with
respect to each of these aspects, based on the strategy each method uses to
find the optimal solution. (0.2)
c) What does ERLE measure and in which context? (0.1)
d) Stalling is a property of Fixpoint LMS. What does it mean and can you always
avoid it (why or why not)? (0.2)
e) Name three different block structures in which adaptive filters are typically
used together with one important thing to consider when using each of them.
(0.2)
f) Name four different adaptive algorithms together with one reason to use each
of them and one reason to not use each of them. (0.2)

2. A data sequence, p(n), that can be approximated as white with variance σ_p^2 = 1 is
sent through a channel, c(n). The received signal from the channel is measured in
additive white noise with variance σ_v^2 = 0.3.

a) Connect an adaptive filter for the purpose of retrieving the original signal
(training phase). Draw a block diagram and label all branches. What is the
adaptive structure called? (0.2)

b) Assume c[n] = [0.5 −0.8 1] and an (adaptive) filter length of M = 2. Determine
the optimum filter wo and Jmin. If necessary, try different delays.
(0.5)
c) Calculate Jex(∞) if µ = µmax/4. (0.2)
d) Illustrate how you can handle slow changes in the channel. (0.1)

3. A mobile phone that uses a sampling rate of 8 kHz for the speech, and that has no
echo cancellation, suffers from severe echo problems.

a) Explain how an echo occurs in such a system and present an adaptive filter
strategy (block diagram with labels) for how you can reduce the echo. (0.3)
b) Select the filter order M if the network delay in each direction is 10 ms, the distance
between the phone and the head of the speaker is 2 cm, and the distance between
the microphone and the speaker is 8 cm. The reflection off the head is considered
to be the longest echo path; there are also shorter ones inside the phone. The
speed of sound is 340 m/s. (0.4)
c) Which algorithm would you use (and why?) and how would you set the design
parameters for that algorithm? (0.2)
d) What is the requirement for good performance if the two speakers are speaking
at the same time (double-talk situation)? (0.1)

4. The RLS algorithm is applied to adapt a two-step predictor with the input signal
d(n) = cos(2πf n + θ) (u(n) = d(n − 2)) where θ is uniformly distributed within
[−π, π] and f = 1/6. The predictor is an FIR filter with p = 2 coefficients.

a) Determine the Wiener solution wo . (0.4)


b) Determine the first two iterations of w(n) if λ = 0.9, the initial value of the
inverse correlation matrix equals the identity matrix and w(0) = 0. Use θ = 0
in this part of the problem. (0.6)

5. Consider the Fixpoint LMS algorithm below

d1(n) = d(n)·2^{s1}
e1(n) = d1(n) − [ŵ^T(n)u(n)]·2^{−s2}
ŵ(n+1) = ŵ(n) + 2^{−s3}[u(n)e1(n)].

Show that, in the mean, it converges to wo,fix = 2^{s1+s2} wo.

Merry Christmas and Happy New Year!

Solutions
ADAPTIVE SIGNAL PROCESSING (ETT042), 2005-12-16

1. Answer the following questions briefly:

a) See Haykin. (0.1)


b) The LMS jumps in the direction of the solution based on the present input,
while the RLS calculates an optimal solution at each stage. Consequently, the
RLS is very fast at convergence since it does not need to converge gradually; it
only needs enough data. The fact that the RLS remembers everything up until the
last sample makes it slow for tracking, while the LMS just continues to jump
in the new direction. (0.2)
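As an illustrative numerical check of this argument (not part of the exam), the sketch below identifies a hypothetical two-tap system with both standard LMS and standard RLS on the same noise-free data; the system w_true, the step size and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, 0.5])          # hypothetical system to identify
N = 200
x = rng.standard_normal(N + 1)         # white excitation
U = np.column_stack([x[1:], x[:-1]])   # regressors [x(n), x(n-1)]
d = U @ w_true                         # noise-free desired signal

# standard LMS: a small jump along the instantaneous gradient each sample
w_lms = np.zeros(2)
mu = 0.01
for n in range(N):
    w_lms += mu * U[n] * (d[n] - w_lms @ U[n])

# standard RLS (lambda = 1): the least-squares solution given all data so far
w_rls = np.zeros(2)
P = 1e3 * np.eye(2)
for n in range(N):
    u = U[n]
    g = P @ u / (1.0 + u @ P @ u)
    w_rls = w_rls + g * (d[n] - w_rls @ u)
    P = P - np.outer(g, u) @ P

err_lms = np.linalg.norm(w_lms - w_true)
err_rls = np.linalg.norm(w_rls - w_true)
```

After 200 samples the RLS has essentially solved the least-squares problem exactly, while the LMS is still contracting geometrically, which is the convergence behaviour described above.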
c) Echo return loss enhancement. It measures the damping of the returning echo from d to
e in the echo-cancellation problem (single-talk). (0.1)
d) Stalling is a result of limited precision. If the method needs smaller values in
order to improve the adaptation, it stalls. Stalling occurs when the gradient
becomes too small compared to the precision. This can happen because the s3
parameter shifts out too much, i.e., if the product of u(n) and e1(n) becomes too small.
(0.2)
e) Identifier – excitation and filter order; equalizer – choice of delay, and additive channel
noise is problematic; noise canceller – the noise and the signal in the primary input must be
uncorrelated, and the noise in the reference must be correlated with the noise in the primary. (0.2)
f) Leaky LMS – handles singular correlation matrices but does not converge to the
Wiener solution; Fast LMS – fast calculations but time delay from the FFT; RLS –
fast convergence but unstable in fixed-point implementations; Sign-LMS – easy to
implement but does not reach the Wiener solution. (0.2)

2. a) Adaptive inverse filter, equalizer.

[Block diagram: p(n) → channel c(n) → sum with noise v(n) → u(n) → adaptive filter w(n) → d̂(n);
the desired signal d(n) = p(n−D) is formed through a delay z^{−D}, and the error
e(n) = d(n) − d̂(n) drives the LMS. Channel equalizer.]
(0.2)
b)

r_u(k) = E[(0.5p(n) − 0.8p(n−1) + p(n−2) + v(n))(0.5p(n−k) − 0.8p(n−k−1) + p(n−k−2) + v(n−k))]

r_u(k) = (0.25 + 0.64 + 1)r_p(k) + (−0.4 − 0.8)(r_p(k+1) + r_p(k−1)) + 0.5(r_p(k+2) + r_p(k−2)) + r_v(k)

r_du(k) = E[p(n−D)(0.5p(n−k) − 0.8p(n−k−1) + p(n−k−2) + v(n−k))]

r_du(k) = 0.5r_p(k−D) − 0.8r_p(k+1−D) + r_p(k+2−D)

Test D = 0, 1, 2 and 3.


 
r_du(D=0) = [0.5 0]^T
r_du(D=1) = [−0.8 0.5]^T
r_du(D=2) = [1 −0.8]^T
r_du(D=3) = [0 1]^T

For D = 3,

wo(D=3) = [0.3576 0.6525]^T

with Jmin = 0.3475. (0.5)
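The D = 3 case can be verified numerically; a minimal sketch (NumPy chosen here as the tool, not required by the exam) that rebuilds R_u and r_du from the channel taps and solves the normal equations:

```python
import numpy as np

c = np.array([0.5, -0.8, 1.0])    # channel taps c[n]
sigma_v2 = 0.3                    # additive-noise variance

# r_u(k) = sum_i c_i c_{i+|k|} + sigma_v^2 * delta(k)  (white p, unit variance)
def r_u(k):
    k = abs(k)
    return float(c[:len(c) - k] @ c[k:]) + (sigma_v2 if k == 0 else 0.0)

Ru = np.array([[r_u(0), r_u(1)],
               [r_u(1), r_u(0)]])
rdu = np.array([0.0, 1.0])        # r_du vector for delay D = 3

wo = np.linalg.solve(Ru, rdu)     # optimum filter
Jmin = 1.0 - rdu @ wo             # sigma_d^2 = 1 for the training sequence
lam_max = np.linalg.eigvalsh(Ru).max()   # largest eigenvalue, used in part c)
```

This reproduces wo ≈ [0.3576, 0.6525]^T and Jmin ≈ 0.3475, and gives λmax for the step-size discussion in part c).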


c) λmax = 3.39, which yields µ = µmax/4 = 0.29/4 ≈ 0.074; with the small-step approximation Jex(∞) ≈ (µ/2)·Jmin·tr[R] ≈ (0.074/2)·0.3475·4.38 ≈ 0.056. (0.2)
d) Decision feedback, see lab 2. (0.1)

3. a) The echo that a speaker receives is a result of leakage between the other
speaker's loudspeaker and microphone. You can therefore use your own voice
to cancel the echo.
[Block diagram: the signal from subscriber A's microphone, u(n), travels over the
network and is both played out through subscriber B's loudspeaker (echo path H(z)
in room B) and fed to an adaptive FIR filter. The microphone signal of subscriber B
is summed with the echo to form d(n); the filter output d̂(n) is subtracted to give
e(n) = d(n) − d̂(n), which is sent back to subscriber A and also drives the
adaptive algorithm.]
(0.3)
b) The longest echo is 2·0.010 s + 2·√(0.02² + 0.04²)/340 ≈ 20.3 ms, corresponding
to about 163 taps, or about 3 taps if the network delays are excluded. (0.4)
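For reference, the delay expression can be evaluated numerically from the stated distances, network delays and speed of sound (the 8 kHz sampling rate comes from the problem):

```python
import math

fs = 8000.0                                # sampling rate, Hz
t_net = 2 * 0.010                          # network delay, both directions, s
path = 2 * math.sqrt(0.02**2 + 0.04**2)    # loudspeaker -> head -> microphone, m
t_ac = path / 340.0                        # acoustic delay, s
t_total = t_net + t_ac                     # total echo-path delay, s

taps_total = math.ceil(t_total * fs)       # taps covering the full echo path
taps_acoustic = math.ceil(t_ac * fs)       # taps if network delays are excluded
```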
c) E.g., NLMS with µ = 0.5, since the step normalized by the input power handles the large power variations of speech. (0.2)
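A sketch of the NLMS update for this setting; the echo-path coefficients and lengths below are made-up illustration values, not taken from the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                 # illustrative echo-path length
h = 0.1 * rng.standard_normal(M)      # hypothetical unknown echo path
mu, eps = 0.5, 1e-6                   # NLMS step size and regularization

w = np.zeros(M)
x = rng.standard_normal(5000)         # far-end signal modelled as white noise
for n in range(M - 1, len(x)):
    u = x[n - M + 1:n + 1][::-1]      # regressor [x(n), ..., x(n-M+1)]
    e = h @ u - w @ u                 # echo minus echo estimate (single-talk)
    w += mu * e * u / (eps + u @ u)   # step normalized by the input power

mis = np.linalg.norm(w - h)           # residual misalignment
```

The power normalization is what makes the fixed µ = 0.5 usable despite the varying level of the far-end signal.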
d) That the signals from the two speakers are uncorrelated. (0.1)

4.
r_u(k) = (1/2) cos(πk/3)

Ru = (1/2)[1 1/2; 1/2 1],  r_du = (1/2)[−1/2 −1]^T,  wo = Ru^{−1} r_du = [0 −1]^T.

P(0) = I, w(0) = 0.

n=1: u(1) = [1/2 1]^T,  g(1) = I[1/2 1]^T / (1 + [1/2 1] I [1/2 1]^T) = (4/9)[1/2 1]^T.
α(1) = d(1) − w^T(0)u(1) = −1/2,
w(1) = w(0) + α(1)g(1) = −(2/9)[1/2 1]^T.

P(1) = P(0) − g(1)u^T(1)P(0) = (1/9)[8 −2; −2 5]

n=2: u(2) = [−1/2 1/2]^T,  g(2) = · · · = (1/53)[−20 14]^T.
α(2) = d(2) − w^T(1)u(2) = −17/18,
w(2) = w(1) + α(2)g(2) = (1/53)[13 −25]^T ≈ [0.25 −0.47]^T (compare with wo).
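The numbers above can be checked with a short script. The two iterations as carried out above use a unit weight in the gain denominator (my reading of the worked numbers, stated here as an assumption), so the sketch does the same:

```python
import numpy as np

# part a): Wiener solution from R_u and r_du
Ru = 0.5 * np.array([[1.0, 0.5],
                     [0.5, 1.0]])
rdu = 0.5 * np.array([-0.5, -1.0])
wo = np.linalg.solve(Ru, rdu)          # expect [0, -1]

# part b): two RLS iterations with P(0) = I, w(0) = 0,
# sample values as used in the solution above
samples = [(np.array([0.5, 1.0]), -0.5),
           (np.array([-0.5, 0.5]), -1.0)]
P = np.eye(2)
w = np.zeros(2)
for u, d in samples:
    g = P @ u / (1.0 + u @ P @ u)      # gain with unit denominator weight
    w = w + (d - w @ u) * g            # a priori error alpha times gain
    P = P - np.outer(g, u) @ P         # inverse-correlation update
```

This reproduces w(2) = (1/53)[13, −25]^T.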
5.

d1(n) = d(n)·2^{s1}
e1(n) = d1(n) − [w^T(n)u(n)]·2^{−s2}
w(n+1) = w(n) + α[u(n)e1(n)],  with α = 2^{−s3}  ⇒
wo,s = 2^{s1+s2} wo = 2^{s1+s2} R^{−1}p.  The filter is scaled up!

w(n+1) = w(n) + α u(n)[2^{s1} d(n) − u^T(n)w(n)·2^{−s2}]
       = [I − α·2^{−s2} u(n)u^T(n)] w(n) + α·2^{s1} u(n)d(n)

Introduce the coefficient-error vector ε(n) = w(n) − wo,s:

ε(n+1) = [I − α·2^{−s2} u(n)u^T(n)] ε(n) − α·2^{−s2} u(n)u^T(n) wo,s + α·2^{s1} u(n)d(n)

Take the expectation of this expression under the independence assumption; since
2^{−s2} R wo,s = 2^{s1} p, the last two terms cancel:

E{ε(n+1)} = [I − α·2^{−s2} R] E{ε(n)} − α·2^{−s2} R wo,s + α·2^{s1} p
          = [I − α·2^{−s2} R] E{ε(n)}

Diagonalize R = QΛQ^T and introduce ν(n) = Q^T E{ε(n)}:

ν(n+1) = [I − α·2^{−s2} Λ] ν(n)
ν_i(n+1) = (1 − α·2^{−s2} λ_i) ν_i(n),  i = 0, 1, ..., M−1

Convergence thus requires

0 < α < 2^{s2} · 2/λmax.
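The result can be illustrated by simulation; a minimal sketch with made-up shift parameters s1 = s2 = 1, s3 = 4 and a hypothetical two-tap optimum (white input and noise-free desired signal, so R = I and the step bound above is satisfied):

```python
import numpy as np

rng = np.random.default_rng(2)
s1, s2, s3 = 1, 1, 4                   # illustrative shift parameters
alpha = 2.0 ** (-s3)                   # alpha = 2^{-s3}
wo = np.array([0.5, -0.3])             # hypothetical optimum filter

x = rng.standard_normal(20001)         # white input
w = np.zeros(2)
for n in range(1, len(x)):
    u = np.array([x[n], x[n - 1]])
    d = wo @ u                          # noise-free desired signal
    e1 = d * 2.0**s1 - (w @ u) * 2.0**(-s2)
    w += alpha * u * e1                 # scaled LMS update

w_fix = 2.0 ** (s1 + s2) * wo           # the scaled optimum 2^{s1+s2} wo
```

With these parameters the iterate settles at 2^{s1+s2}·wo, the scaled-up solution derived above.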

Merry Christmas and Happy New Year!
