
ADAPTIVE FILTERS

Solutions of Computer Projects

Ali H. Sayed

Electrical Engineering Department


University of California at Los Angeles

© 2008
All rights reserved

Readers are welcome to bring to the attention of the author at sayed@ee.ucla.edu any typos or
suggestions for improvements. The author is thankful for any feedback.

No part of this material can be reproduced or distributed without written consent from the publisher.


CONTENTS

PART I: OPTIMAL ESTIMATION
PART II: LINEAR ESTIMATION
PART III: STOCHASTIC-GRADIENT METHODS
PART IV: MEAN-SQUARE PERFORMANCE
PART V: TRANSIENT PERFORMANCE
PART VI: BLOCK ADAPTIVE FILTERS
PART VII: LEAST-SQUARES METHODS
PART VIII: ARRAY ALGORITHMS
PART IX: FAST RLS ALGORITHMS
PART X: LATTICE FILTERS
PART XI: ROBUST FILTERS

PART I: OPTIMAL ESTIMATION

COMPUTER PROJECT
Project I.1 (Comparing optimal and suboptimal estimators) The programs that solve this project are the following.
1. psk.m This function generates a BPSK signal x that assumes the value +1 with probability p and the value −1 with probability 1 − p. (A minimal sketch of such a generator appears after this list.)
2. partB.m This program generates four plots of x̂_N as a function of N, one for each value of p; see Fig. 1.

Figure I.1. The plots show the values of the optimal estimates x̂_N for different choices of N (the number of observations) and for different values of p (which determines the probability distribution of x).

3. partC.m This program generates a plot that shows {x̂_N, x̂_{N,av}} for four different values of p over the interval 1 ≤ N ≤ 300; see Fig. 2.
4. partD.m This program estimates the variances of {x̂_N, x̂_{N,av}, x̂_dec, x̂_sign}. Typical values are {0.0031, 0.1027, 0.0040, 0.0040}.
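For reference, the generator of item 1 can be written in a few lines. The following is a minimal sketch; the exact interface of the author's psk.m (e.g., whether it returns one sample or a vector) is an assumption here.

    function x = psk(p, N)
    % PSK  Generate N BPSK samples that equal +1 with probability p
    % and -1 with probability 1-p (sketch; the actual psk.m may differ).
    x = 2*(rand(N, 1) <= p) - 1;
    end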



Figure I.2. The plots show the values of the optimal estimates x̂_N (dotted lines) and the averaged estimates x̂_{N,av} (solid lines) for different choices of N (the number of observations) and for different values of p (which determines the probability distribution of x). Observe how the averaged estimates are significantly less reliable for a smaller number of observations.

PART II: LINEAR ESTIMATION

COMPUTER PROJECTS
Project II.1 (Linear equalization and decision devices) The programs that solve this project are the following.
1. partA.m This program solves parts (a) and (c), which deal with BPSK data. The program generates 5 plots that show the time and scattering diagrams of the transmitted sequence {s(i)}, the received sequence {y(i)} (which is the input to the equalizer), and the equalizer output {ŝ(i − Δ)} for several values of Δ. A final plot shows the values of the estimated m.m.s.e. and the number of erroneous decisions as a function of Δ. The program allows the user to change σ_v². Typical outputs of this program are shown in the sequel.
Figure 1 shows four graphs displayed in a 2 × 2 grid. The graphs in the left column pertain to the transmitted sequence s(i), while the graphs in the right column pertain to the received sequence y(i). The graphs in the bottom row are scatter diagrams; they show where the symbols {s(i), y(i)} are located in the complex plane for all 2000 transmissions. We thus see that the s(i) are concentrated at ±1, as expected, while the y(i) appear to be concentrated around the four values {−1.5, −0.5, 0.5, 1.5}. The graphs in the top row are time diagrams; they show what values the symbols {s(i), y(i)} assume over the 2000 transmissions. Clearly, the s(i) are equal to ±1, while the y(i) assume values around {±0.5, ±1.5}.

Figure II.1. The plots show the time (top row) and scatter (bottom row) diagrams of the transmitted and received sequences {s(i), y(i)} for BPSK transmissions. Observe how the s(i) are concentrated at ±1 while the y(i) are concentrated around {±0.5, ±1.5}.

Figure 2 again shows four graphs displayed in a 2 × 2 grid. The graphs in the left column pertain to the received sequence y(i), while the graphs in the right column pertain to the output of the equalizer, ŝ(i − Δ), for Δ = 0, i.e., ŝ(i). The graphs in the bottom row are the scatter diagrams while the graphs in the top row are the time diagrams. Observe how the symbols at the output of the equalizer are concentrated around ±1.
Figure 3 is similar to Fig. 2, except that it relates to Δ = 1. Comparing Figs. 2 and 3, observe how the scatter diagram of the output of the equalizer is more spread in the latter case. Figure 4 is also similar to Fig. 2, except that it now relates to Δ = 3. Observe how the performance of the equalizer is poor in this case.
Figure 5 shows two plots: one pertains to the m.m.s.e. as a function of Δ, while the other pertains to the number of erroneous decisions as a function of Δ. It is seen that although the choice Δ = 0 leads to the lowest m.m.s.e. at the assumed SNR level of 25 dB, all three choices Δ = 0, 1, 2 lead to zero errors.
Figure 6 again shows the time and scatter diagrams of the equalizer input (left) and output (right) using Δ = 0 and σ_v² = 0.05, i.e., SNR = 14 dB. Observe how the equalizer performance is more challenging at this SNR level. Still, the number of erroneous decisions came out as zero in this run.




Figure II.2. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), ŝ(i)}, for Δ = 0 and BPSK transmissions. Observe how the equalizer output is concentrated around ±1.


Figure II.3. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), ŝ(i)}, for Δ = 1 and BPSK transmissions. Observe how the equalizer output is again concentrated around ±1.

2. partB.m This program solves parts (b) and (d), which deal with QPSK data. The program generates 3 plots that show the scatter diagrams of the transmitted sequence {s(i)}, the received sequence {y(i)}, and the equalizer output {ŝ(i − Δ)} for several values of Δ. A final plot shows the values of the estimated m.m.s.e. and the number of erroneous decisions as a function of Δ. The program allows the designer to change the value of the noise variance σ_v². Typical outputs of this program are shown in the sequel.
Figure 7 shows the scatter diagrams of the transmitted (left) and received (right) sequences, {s(i), y(i)}.




Figure II.4. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), ŝ(i)}, for Δ = 3 and BPSK transmissions. Observe how the performance of the equalizer is poor in this case.


Figure II.5. The plots show the m.m.s.e. (left) and the number of erroneous decisions (right) as a function of Δ for BPSK transmissions.

Observe how the s(i) are concentrated at √2(±1 ± j)/2, as expected, while the y(i) appear to be concentrated around 16 values.
Figure 8 shows the scatter diagrams of the equalizer input (left) and output (right) sequences for the cases Δ = 0 and Δ = 1. Observe how the equalizer output is now concentrated around the QPSK constellation symbols.
Figure 9 repeats the same scatter diagrams for the cases Δ = 2 and Δ = 3. Observe how the equalizer fails completely for Δ = 3.
The plot showing the m.m.s.e. and the number of erroneous decisions as a function of Δ is similar to Fig. 5. Figure 10, on the other hand, shows the time and scatter diagrams of the equalizer input (left) and output (right) using Δ = 0 and Δ = 1 for σ_v² = 0.05, i.e., SNR = 14 dB. The equalizer performance is more challenging at this SNR level. Still, the number of erroneous decisions came out as zero in this run.
3. partE.m This program solves parts (e) and (g) for BPSK data and for various noise levels. It generates a plot of the symbol error rate, measured as the ratio of the number of erroneous decisions to the total number of transmissions (adjusted to 20000), as a function of the SNR at the input of the equalizer. The plot shows the SER curves with and without equalization. A typical output is shown in the left plot of Fig. 11. (A minimal sketch of this SER measurement appears after the discussion of Fig. 11 below.)
4. partF.m This program solves parts (f) and (g) for QPSK data and for various noise levels. It also generates a plot of the symbol error rate. The plot shows the SER curves with and without equalization.




Figure II.6. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), ŝ(i)}, for Δ = 0, BPSK transmissions, and σ_v² = 0.05 (i.e., SNR = 14 dB).


Figure II.7. The plots show the scatter diagrams of the transmitted (left)
and received (right) sequences {s(i), y(i)} for QPSK transmissions.

A typical output is shown in the right plot of Fig. 11.
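To make the SER measurement of items 3 and 4 concrete, here is a minimal sketch for the BPSK case; the channel taps, noise level, and equalizer taps below are placeholders, not the project's actual settings.

    % Sketch: symbol error rate of a linear equalizer (BPSK), as in partE.m.
    N     = 20000;                        % total number of transmissions
    c     = [1 0.5];                      % hypothetical channel
    Delta = 1;                            % hypothetical equalizer delay
    s     = sign(rand(N,1) - 0.5);        % BPSK symbols +/-1
    y     = filter(c, 1, s) + sqrt(0.05)*randn(N,1);   % received sequence
    w     = [0.8; -0.3];                  % hypothetical trained equalizer taps
    shat  = filter(w, 1, y);              % equalizer output estimates s(i-Delta)
    dec   = sign(shat);                   % decision device (slicer)
    nerr  = sum(dec(Delta+1:end) ~= s(1:end-Delta));   % align by the delay
    SER   = nerr/(N - Delta);             % ratio of erroneous decisions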



Figure II.8. The plots show the scatter diagrams of the equalizer input (left) and output (right) for the cases Δ = 0 and Δ = 1 for QPSK transmissions.


Figure II.9. The plots show the scatter diagrams of the equalizer input (left) and output (right) for the cases Δ = 2 and Δ = 3 for QPSK transmissions.



Figure II.10. The plots show the scatter diagrams of the equalizer input (left) and output (right) for QPSK data with Δ = 0 (top row) and Δ = 1 (bottom row) and at SNR = 14 dB.


Figure II.11. Plots of the SER over 20000 transmitted BPSK symbols (left) and QPSK symbols (right), with and without equalization, for SNR values between 8 and 20 dB and using Δ = 1.


Project II.2 (Beamforming) The programs that solve this project are the following.
1. partA.m This program solves part (a). It generates two plots. One plot shows a portion of the baseband sinusoidal wave s(t) along with the corresponding signals received at the four antennas. A second plot shows s(t) and the output of the beamformer; see Figs. 12 and 13. The estimated value of the m.m.s.e. is 0.0494, whereas the theoretical m.m.s.e. is 0.05.

Figure II.12. The plots show the baseband signal s(t) (top) along with the
signals received at the four antennas. In this simulation, the noise at each
antenna is assumed white, and the noises across the antennas are assumed
uncorrelated.


Figure II.13. The plot shows the baseband signal (solid line) and the output
of the beamformer (dotted line).

2. partB.m This program solves part (b). It generates two plots. One plot shows the baseband sinusoidal waves at 30° and 45°, along with the signals received at the four antennas. A second plot shows the baseband waves again, along with the combined signal s(t), and the beamformer output superimposed on the signal arriving at 30°; see Figs. 14 and 15. The estimated value of the m.m.s.e. is 0.067, while the theoretical m.m.s.e. is 0.05.
3. partC.m This program solves part (c). It generates four plots. Two of the plots deal with the case of a single baseband signal at 30°; they plot the baseband signal and the signals at the antennas, as well as the baseband signal and the beamformer output. Two other plots deal with the case of two incident signals at 30° and 45°; they plot the incident baseband signals and the signals at the antennas, as well as the combined signal and the beamformer output;


see Figs. 16–19. The estimated value of the m.m.s.e. is 0.0907 for a single incident wave and 0.1013 for the case of two incident waves, while the theoretical m.m.s.e. is 0.0901.

Figure II.14. The plots show the baseband signal at 30° (first from top) and the interfering signal at 45° (second from top), along with the signals received at the four antennas.


Figure II.15. The figure shows the baseband signal at 30° and the interfering signal at 45° (middle plots), along with the combined signal (bottom plot), and the beamformer output (dotted line) superimposed on the signal arriving at 30° (solid line in top plot).

4. partD.m This program solves part (d) and generates two polar plots that show the power gain of the beamformer as a function of the direction of arrival θ; see Fig. 20. (A minimal sketch of this gain computation follows below.)
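The power-gain computation behind Fig. 20 can be sketched as follows; the uniform linear array with half-wavelength spacing and the weight vector w are illustrative assumptions.

    % Sketch: power gain of a beamformer w versus direction of arrival theta.
    n     = 4;                                 % number of antennas (4 or 25 in Fig. 20)
    w     = ones(n,1)/n;                       % placeholder weights; use the designed beamformer
    theta = linspace(0, 2*pi, 721);
    G     = zeros(size(theta));
    for k = 1:numel(theta)
        a    = exp(1j*pi*(0:n-1)'*cos(theta(k)));  % steering vector (half-wavelength spacing)
        G(k) = abs(w'*a)^2;                        % power gain |w^* a(theta)|^2
    end
    polarplot(theta, 10*log10(G + eps));       % gain in dB, as in Fig. 20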



Figure II.16. The plots show the baseband signal (top) along with the signals
received at the four antennas. In this simulation, the noise at each antenna is
assumed white, and the noises across the antennas are spatially correlated.


Figure II.17. The plots show the baseband signal at 30° (first from top), the interfering signal at 45° (second from top), along with the signals received at the four antennas for the case of spatially-correlated noise.



Figure II.18. The plot shows the baseband signal (solid line) and the output
of the beamformer (dotted line) in the case of spatially-correlated noise.


Figure II.19. The figure shows the combined incident signal (bottom), along with the beamformer output (dotted line) superimposed on the signal arriving at 30° (solid line in top plot).


Figure II.20. A polar plot of the power gain of the beamformer in dB as a function of the direction of arrival for the case of 4 antennas (left) and 25 antennas (right).


Project II.3 (Decision feedback equalization) The programs that solve this project are the following.
1. partA.m This program solves part (a) and generates a plot showing the impulse response sequence and the magnitude of the frequency response of the channel, |C(e^{jω})|, over [0, π]. Its output is shown in Fig. 21. (A minimal sketch of this computation appears at the end of this list.)

Figure II.21. Impulse and frequency response of the channel C(z) = 0.5 + 1.2z⁻¹ + 1.5z⁻² − z⁻³.

2. partB.m This program solves part (b) and generates a plot showing a scatter diagram of the
transmitted and received sequences {s(i), y(i)}. A typical output is shown in Fig. 22.

Figure II.22. Scatter diagrams of the transmitted (left) and received sequences (right), {s(i), y(i)}, for QPSK transmissions.

3. partC.m This program solves parts (c) and (d) and generates three plots. One plot shows scatter diagrams for the received sequence {y(i)} and the sequence at the input of the decision device (for the case Δ = 5); see Fig. 23. A second plot shows the number of erroneous decisions as a function of Δ, as well as the m.m.s.e. (both theoretical and measured) as a function of Δ; see Fig. 24. A third plot shows the impulse response sequences of the channel, the combination channel-feedforward filter, and the feedback filter delayed by Δ; see Fig. 25. The impulse response of the feedback filter has also been extended by some zeros in the plot in order to match its length with the convolution of the channel and the feedforward filter for ease of comparison. Observe how the taps of the feedback filter cancel the postcursor ISI. The number of errors for this simulation was zero.
4. partE.m This program generates a plot showing the number of erroneous decisions as a function of the length of the feedforward filter, L; see the plot on the left in Fig. 26.
5. partF.m This program generates a plot showing the number of erroneous decisions as a function of the length of the feedback filter, Q; see the plot on the right in Fig. 26.
6. partG.m This program solves part (g) and generates a plot showing the symbol error rate as a function of the SNR level at the input of the equalizer. Each point in the plot is measured using 2 × 10⁴ transmissions. A typical output is shown in Fig. 27.




Figure II.23. Scatter diagrams of the received sequence (input of the equalizer) and the sequence at the input of the decision device for Δ = 5 and QPSK transmissions.

Figure II.24. The plot on the left shows the number of erroneous decisions as a function of Δ, while the plot on the right shows the m.m.s.e. as a function of Δ as deduced from both theory and measurements.

7. partH.m This program solves part (h) and generates a plot that shows the scatter diagrams of the received sequence and of the sequence at the input to the decision device (for L = 4 and Δ = 4). The plot also shows the number of erroneous decisions as a function of L, as well as the theoretical and estimated values of the m.m.s.e. for various L. A typical output is shown in Fig. 28. The number of errors in this simulation was zero.
8. partI.m This program solves part (i) and generates a plot that shows the impulse and frequency responses of both the actual channel and its estimate, as well as scatter diagrams of the received sequence {y(i)} and the sequence at the input of the decision device; see Fig. 29.
9. partJ.m This program solves part (j) and generates a plot that shows the impulse and frequency responses of the actual channel and its estimate, as well as scatter diagrams of the received sequence {y(i)} and the sequence at the input of the decision device; see Fig. 30.
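The channel responses of item 1 (Fig. 21) can be reproduced directly from the channel quoted in the caption of Fig. 21; a minimal sketch:

    % Sketch: impulse and frequency responses of C(z) = 0.5 + 1.2z^-1 + 1.5z^-2 - z^-3.
    c = [0.5 1.2 1.5 -1];            % channel taps
    subplot(1,2,1);
    stem(0:length(c)-1, c);          % impulse response versus tap index
    xlabel('Tap index'); ylabel('Amplitude');
    subplot(1,2,2);
    [H, w] = freqz(c, 1, 512);       % frequency response over [0, pi]
    plot(w, 20*log10(abs(H)));       % magnitude |C(e^{j\omega})| in dB
    xlabel('\omega (rad/sample)'); ylabel('dB');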



Figure II.25. The top left plot shows the impulse response sequence of the channel. The two plots on the right show the convolution of the channel and feedforward filter impulse responses (top) and the feedback filter response delayed by Δ (bottom).


Figure II.26. Number of erroneous decisions as a function of the feedforward filter length using Δ = 5 and Q = 2 (left), and as a function of the feedback filter length using Δ = 5 and L = 6 (right).


Figure II.27. The plot shows the symbol error rate as a function of the SNR level at the input of the equalizer. The simulation assumes Δ = 5, L = 6, and Q = 1.



Figure II.28. The plots in the first row show the scatter diagrams of the received sequence and of the sequence at the input of the decision device for L = 4 and Δ = 4, using a linear equalizer structure. The plots in the second row show the number of erroneous decisions as a function of the feedforward filter length (L), as well as the theoretical and estimated values of the m.m.s.e. for various values of L.


Figure II.29. The plots in the first row show the impulse and frequency responses of the channel and its estimate. The plots in the second row show the scatter diagrams of the received sequence and the sequence at the input of the decision device. This simulation pertains to a DFE implementation with L = 6, Q = 1, and Δ = 5.



Figure II.30. The plots in the first row show the impulse and frequency responses of the channel and its estimate. The plots in the second row show the scatter diagrams of the received sequence and the sequence at the input of the decision device. This simulation pertains to a linear equalizer with L = 4 and Δ = 4.

PART III: STOCHASTIC-GRADIENT METHODS

COMPUTER PROJECTS
Project III.1 (Constant-modulus criterion) The programs that solve this project are the following.
1. partA.m This program prints the locations of the global minima as well as the minimum value of the
cost function at these minima. It also plots J(w).
2. partC.m This program generates three plots; see Figs. 1–3. Each plot shows the evolution of w(i) as a function of time, as well as the evolution of J[w(i)].

Figure III.1. The top plot shows the evolution of the scalar weight w(i) from two different initial conditions and for μ = 0.2; the filter is seen to converge to the global minimum that is closest to the initial condition. The bottom plot shows the evolution of J[w(i)] superimposed on the cost function J(w) for both initial conditions.

3. partD.m This program generates a plot showing the contour curves of J(w) superimposed on the trajectories of the weight estimates; see Fig. 4. (A minimal sketch of the scalar recursion from item 2 appears below.)
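To illustrate the behavior reported in Figs. 1–3, here is a minimal sketch of a scalar gradient-descent recursion on a constant-modulus-type cost; the particular cost J(w) = E(1 − w²u²)² with unit-variance Gaussian u (so that J(w) = 1 − 2w² + 3w⁴) is an illustrative assumption, not necessarily the project's exact cost.

    % Sketch: scalar steepest descent on an illustrative constant-modulus cost
    % J(w) = 1 - 2w^2 + 3w^4, whose gradient is J'(w) = -4w + 12w^3.
    mu = 0.2;  w = 0.3;  niter = 20;      % step-size and initial condition as in Fig. 1
    W = zeros(niter,1);  Jw = zeros(niter,1);
    for i = 1:niter
        w     = w - mu*(-4*w + 12*w^3);   % gradient-descent update
        W(i)  = w;
        Jw(i) = 1 - 2*w^2 + 3*w^4;        % cost along the trajectory
    end
    plot(1:niter, W);                     % evolution of w(i), as in the top plot of Fig. 1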



Figure III.2. The top plot shows the evolution of w(i) starting from two different initial conditions and for μ = 0.6. Due to the larger value of the step-size, the filter is seen to oscillate around the global minimum that is closest to the initial condition. The bottom plot shows the evolution of J[w(i)] superimposed on the cost function J(w) for both initial conditions.


Figure III.3. The top plot shows the evolution of w(i) starting from the initial condition w(−1) = 0.2 and for a large step-size, μ = 1. In this case, the filter does not converge.



Figure III.4. A plot of the contour curves of J(w), along with the trajectories of the resulting weight estimates starting from two different initial conditions (one is on the left and the other is on the right). Here w = col{α, β}.


Project III.2 (Constant-modulus algorithm) The programs that solve this project are the following.
1. partA.m This program generates a plot showing the learning curve of the steepest-descent method and an ensemble-average learning curve for CMA2-2. The program allows the user to modify the values of μ, w_{−1}, L, and N. A typical output is shown in Fig. 5.

Figure III.5. Learning curve of the steepest-descent method corresponding to the cost function J(w) = E(γ − |uw|²)², along with an ensemble-average curve generated by running CMA2-2 and averaging over 200 experiments.

2. partB.m This program generates two plots. One plot shows the contour curves of the cost function along with the four weight-vector trajectories generated by the steepest-descent method (see Fig. 6). The second plot shows the same contour curves with the weight-vector trajectories generated by CMA2-2 (see Fig. 7). The program allows the user to modify the values of μ, w_{−1}, and N. (A minimal sketch of the CMA2-2 update appears below.)
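For reference, a minimal sketch of the CMA2-2 update used in this project; the filter length, step-size, regressor model, and γ = 1 are illustrative assumptions.

    % Sketch: CMA2-2, w_i = w_{i-1} + mu * u_i' * z(i) * (gamma - |z(i)|^2),
    % where z(i) = u_i * w_{i-1} and u_i is a row regressor.
    mu = 0.01;  gamma = 1;  M = 2;  N = 500;   % illustrative parameters
    w  = [0.2; 0.2];                           % placeholder initial condition w_{-1}
    for i = 1:N
        u = randn(1, M);                       % hypothetical regressor
        z = u*w;                               % filter output
        w = w + mu*u'*z*(gamma - abs(z)^2);    % CMA2-2 stochastic-gradient update
    end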



Figure III.6. Contour curves of the cost function J(w) = E(γ − |uw|²)², along with the trajectories generated by the steepest-descent method from four distinct initial conditions.


Figure III.7. Contour curves of the cost function J(w) = E(γ − |uw|²)², along with the trajectories generated by CMA2-2 from four distinct initial conditions.


Project III.3 (Adaptive channel equalization) The programs that solve this project are the following.
1. partA.m This program solves part (a) and generates two plots showing the scatter diagrams of the transmitted, received, and estimated sequences {s(i), u(i), ŝ(i − Δ)} during training and decision-directed modes of operation. A typical output is shown in Figs. 8–9. No erroneous decisions were observed in this simulation.

Figure III.8. Scatter diagrams of the QPSK training sequence (left) and the
16-QAM transmitted data (right).


Figure III.9. Scatter diagrams of the received sequence (left) and the output
of the equalizer (right). Observe how the equalizer outputs are centered around
the symbols of the 16-QAM constellation.

2. partB.m This program solves part (b) and generates a plot showing the scatter diagrams of the output of the equalizer for three lengths of training period (150, 300, and 500), and for two adaptation schemes (LMS and ε-NLMS), as shown in Fig. 10. No erroneous decisions were observed in this simulation in the last two cases of ε-NLMS, while 1 error was observed in the last case for LMS. Observe how the performance of LMS is inferior. If the training period is increased further, the performance of LMS can be improved. (A minimal sketch of the ε-NLMS update appears at the end of this list.)
3. partC.m This program solves part (c) and generates a plot showing the scatter diagram of the output of the equalizer when the transmitted data is selected from a 256-QAM constellation and the equalizer is trained using ε-NLMS with 500 QPSK symbols. A typical output is shown in Fig. 11. No erroneous decisions were observed in this simulation.
4. partD.m This program solves part (d) and generates a plot of the SER curve versus SNR for several QAM constellations. A typical output is shown in Fig. 12. Observe how, for a fixed SNR, the probability of error increases as the order of the constellation increases.
5. partE.m This program solves part (e) and generates two plots of the scatter diagrams of the input and output of the equalizer for two different configurations. A typical output is shown in Fig. 13. No erroneous decisions were observed in either simulation.
6. partF.m This program solves part (f) and generates a plot of the SER curve versus SNR for several QAM constellations. A typical output is shown in Fig. 14.
7. partG.m This program solves part (g). Figure 15 shows the impulse and frequency responses of the channel, and it is seen that the channel has a pronounced spectral null, a fact that makes the equalization task more challenging. Figure 16 shows the frequency and impulse responses of the channel, the RLS



Figure III.10. Scatter diagrams of the signal at the output of the equalizer for three different numbers of training symbols (150, 300, and 500) and for both algorithms, LMS (top row) and ε-NLMS (bottom row).


Figure III.11. Scatter diagram of the signal at the output of the equalizer. The equalizer is trained with 500 QPSK symbols using ε-NLMS, and the transmitted data during the decision-directed mode is from a 256-QAM constellation.

equalizer at the end of the adaptation process, and of the convolution of the channel and equalizer impulse response sequences (we are only plotting the real parts of the impulse response sequences). Observe how the combined frequency response still exhibits a spectral null; the equalizer has not been successful in fully removing the null from the channel spectrum (i.e., it has not been successful in flattening the channel). The program also generates a plot of the scatter diagrams of the transmitted 4-QAM data and the outputs of the linear equalizer for two cases. In one case, the equalizer is trained with RLS for 100 iterations, and in the second case the equalizer is trained with ε-NLMS for 2000 iterations. A typical output is shown in Fig. 17. Observe how ε-NLMS fails for this channel while no erroneous decisions were observed for RLS in this simulation. The performance of RLS can be improved by increasing the length of the equalizer.
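A minimal sketch of the ε-NLMS update used for training in this project; the regularization ε, step-size, filter length, and placeholder data are assumptions.

    % Sketch: epsilon-NLMS training of an M-tap equalizer,
    % w_i = w_{i-1} + mu*u_i'*e(i)/(ep + ||u_i||^2).
    mu = 0.5;  ep = 1e-6;  M = 8;  N = 500;       % illustrative parameters
    U  = randn(N,M) + 1j*randn(N,M);              % placeholder regressors; use received data
    d  = sign(randn(N,1)) + 1j*sign(randn(N,1));  % placeholder QPSK training symbols
    w  = zeros(M,1);
    for i = 1:N
        e = d(i) - U(i,:)*w;                            % a priori error
        w = w + mu*U(i,:)'*e/(ep + norm(U(i,:))^2);     % epsilon-NLMS update
    end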



Figure III.12. Plots of the SER as a function of the SNR at the input of the equalizer and the order of the QAM constellation. The adaptive filter is trained with ε-NLMS using 500 QPSK training symbols.


Figure III.13. Scatter diagrams of the input and output of the decision-feedback equalizer for 64-QAM transmissions. The top row corresponds to a DFE with 10 feedforward taps, 2 feedback taps, and delay Δ = 7. The lower plot corresponds to a DFE with 20 feedforward taps, 2 feedback taps, and delay Δ = 10.



Figure III.14. A plot of the SER as a function of the SNR at the input of the equalizer and the order of the QAM constellation. The DFE is trained with ε-NLMS using 500 QPSK training symbols.

Figure III.15. Impulse (left) and frequency (right) responses of the channel.


Figure III.16. Frequency and impulse responses of the channel (left), the RLS
equalizer at the end of the adaptation process (center), and of the convolution
of the channel and equalizer (right).


Figure III.17. Scatter diagrams of the transmitted sequence and the outputs of the equalizer for two training algorithms: RLS (center) and ε-NLMS (far right). The former is trained with 100 QPSK symbols while the latter is trained with 2000 QPSK symbols.


Project III.4 (Blind adaptive equalization) The programs that solve this project are the following.
1. partA.m This program solves part (a) and generates two plots. Figure 18 shows the impulse responses of the channel, the equalizer, and the combination channel-equalizer. Observe that the latter has a peak at sample 19, so that the delay introduced by the channel and equalizer is 19. (A minimal sketch of locating this peak appears after this list.) Figure 19 shows a typical plot of the scatter diagrams at transmission, reception, and output of the equalizer.

Figure III.18. Impulse responses of the channel, the blind equalizer, and
the combination channel-equalizer for QPSK transmissions. Observe that the
latter has a peak at sample 19.


Figure III.19. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for QPSK transmissions and using
CMA2-2. The first 2000 samples are ignored in order to give the equalizer
sufficient time for convergence.

2. partB.m This program solves part (b) and generates three plots. Figure 20 shows the impulse responses of the channel, the equalizer, and the combination channel-equalizer. Figure 21 shows a typical plot of the scatter diagrams at transmission, reception, and output of the equalizer. Observe that although symbols from 16-QAM do not have constant modulus, CMA2-2 still functions properly, albeit more slowly. Figure 22 shows the real and imaginary parts of 50 transmitted symbols and the corresponding 50 decisions by the slicer (with the delay of 19 samples accounted for in order to synchronize the signals). These 50 symbols are chosen from the later part of the data after the equalizer has converged.

Figure III.20. Impulse responses of the channel, the blind equalizer, and the
combination channel-equalizer for 16-QAM transmissions.

3. partC.m This program solves part (c) and generates three plots as in part (b). Here we only show in
Fig. 23 the resulting scatter diagrams for MMA. Observe how the scatter diagram at the output of the
equalizer is more focused when compared with CMA2-2.
4. partD.m This program solves part (d) and generates scatter diagrams for three additional blind adaptive
algorithms. Typical outputs are shown in Fig. 24.
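The overall channel-equalizer delay noted in item 1 can be located by convolving the channel and equalizer impulse responses and finding the peak; a minimal sketch, where c and w are placeholders for the actual channel and the converged blind equalizer.

    % Sketch: locate the delay introduced by the channel-equalizer combination.
    c = [1 0.4 0.2];           % placeholder channel impulse response
    w = randn(1, 30);          % placeholder converged equalizer taps
    t = conv(c, w);            % combined channel-equalizer impulse response
    [~, k] = max(abs(t));      % index of the dominant tap
    delay = k - 1;             % overall delay (19 samples in Fig. 18)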



Figure III.21. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for 16-QAM transmissions and using
CMA2-2. The first 15000 samples are ignored in order to give the equalizer
sufficient time for convergence.


Figure III.22. Time diagrams of 50 transmitted symbols and the corresponding decisions after equalizer convergence. The top two plots correspond to the
real parts, while the bottom two plots correspond to the imaginary parts.


Figure III.23. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for 16-QAM transmissions and using
MMA. The first 15000 samples are ignored in order to give the equalizer sufficient time for convergence.



Figure III.24. Scatter diagrams of the transmitted (left) and received sequences (center), and of the output of the equalizer (right) for three blind algorithms: CMA1-2 (top row), RCA (middle row), and the stop-and-go algorithm (bottom row).

PART IV: MEAN-SQUARE PERFORMANCE

COMPUTER PROJECTS
Project IV.1 (Line echo cancellation) The programs that solve this project are the following.
1. partA.m This program generates two plots showing the impulse response sequence of the channel as
well as its frequency response, as shown in Fig. 1. It is seen that the bandwidth of the echo path model
is approximately 3.4 kHz.

Figure IV.1. Impulse and frequency responses of the echo path model.

2. partB.m This program solves parts B and C; see Fig. 2. The powers of the input and output signals are 0 and −6.3 dB, respectively, so that ERL ≈ 6.3 dB and the amplitude of a signal going through the channel is reduced by 1/2.

Figure IV.2. The top row shows the CSS data and their spectrum, while the
bottom row shows 5 blocks of CSS data and the resulting echo.

3. partD.m This program solves parts D and E and generates two plots. Figure 3 shows the echo path impulse response and its estimate, while Fig. 4 shows the far-end signal, the echo, and the error signal. The resulting ERLE was found to be around 56 dB. Consequently, the total attenuation from the far-end input to the output of the adaptive filter is seen to be ERL + ERLE ≈ 62 dB.
4. partF.m The power of the residual error signal is the MSE of the adaptive filter, so that

P_error (dB) ≈ 10 log₁₀( σ_v² + μ σ_v²/2 ) = −39.42 dB

Since P_echo ≈ −6.3 dB, we find that the theoretical ERLE is ≈ 33.1 dB. The simulated value in this experiment is 29.5 dB. The simulated value will get closer to theory if the simulation is run for a longer period of time.
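The ERL and ERLE figures quoted above follow directly from signal powers; a minimal sketch, where the far-end, echo, and residual-error signals are placeholders.

    % Sketch: estimate ERL and ERLE from signal powers (in dB).
    x = randn(1e4,1);                         % placeholder far-end signal
    d = 0.5*filter([0.2 -0.1], 1, x);         % placeholder echo through a toy echo path
    e = 0.01*randn(1e4,1);                    % placeholder residual error after cancellation
    ERL  = 10*log10(mean(x.^2)/mean(d.^2));   % echo return loss (about 6.3 dB in part B)
    ERLE = 10*log10(mean(d.^2)/mean(e.^2));   % echo return loss enhancement (about 56 dB in part D)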



Figure IV.3. The figure shows the impulse response sequences of the echo
path and its estimate by the adaptive filter.


Figure IV.4. The top row shows the far-end signal, while the middle row
shows the corresponding echo and the last row shows the resulting error signal.


Project IV.2 (Tracking a Rayleigh fading channel) The programs that solve this project are the following.
1. partB.m This program generates three plots. One plot shows the amplitude and phase components of a Rayleigh fading sequence, while a second plot shows the cumulative distribution function of the amplitude sequence. Typical outputs are shown in Figs. 5 (for the first 500 samples) and 6. The latter figure can be used, for example, to infer that the amplitude |x(n)| is attenuated by more than 10 dB for 10% of the time, and so forth.

Figure IV.5. Amplitude and phase plots of a Rayleigh fading sequence with
Doppler frequency fD = 66.67Hz and sampling frequency fs = 1 kHz.

Observe that the sequence x(n) in Fig. 5 undergoes rapid variations. This is because, in this case, the Doppler frequency and the sampling frequency are essentially of the same order of magnitude (fD = 66.67 Hz and fs = 1000 Hz). In loose terms, the inverse of fD gives an indication of the time interval needed for the sequence x(n) to undergo a significant change in its value. In our example, this time amounts to 15 ms, which translates into 15 samples at the assumed sampling rate. For the sake of comparison, Fig. 7 plots the amplitude component of another Rayleigh fading sequence with the same Doppler frequency fD = 66.67 Hz but with sampling period Ts = 1 μs. The ratio of fs to fD is now 15000 samples, so that the variations in x(n) occur at a significantly slower pace.

2. partC.m This program solves parts C, D, and E. It generates a plot of the learning curve of LMS by running it for 30000 iterations and averaging over 100 experiments. The channel rays are assumed to fade at 10 Hz and the sampling period is set to Ts = 0.8 μs. Figure 8 shows a typical trajectory of the amplitude of the first ray and its estimate by the LMS filter. The leftmost plot shows that the trajectories are in very good agreement. The rightmost plot zooms onto the interval [1000, 1100]. Figure 9 shows only the first 1000 samples of a typical learning curve. In addition, the simulated and theoretical MSE values in this case are 0.001056 and 0.001026, respectively. The theoretical values obtained from the expressions shown in parts (d) and (e) are almost identical. This is because at fD = 10 Hz, the value of α is unity for all practical purposes. Also, the difference that is observed between simulation and theory for the MSE is in part due to the approximate first-order auto-regressive model that is being used to model the Rayleigh fading behavior. (A minimal sketch of such a model appears after this list.)
3. partF.m This program solves part F and generates a plot of the MSE as a function of the Doppler frequency over the range 10 Hz to 25 Hz. A typical output is shown in Fig. 10. It is seen that as the Doppler frequency increases, the tracking performance of LMS deteriorates. This behavior is expected since higher Doppler frequencies correspond to faster variations in the channel; remember that a higher Doppler frequency translates into a larger covariance matrix Q and, therefore, into more appreciable channel nonstationarities.




Figure IV.6. The cumulative distribution function of a Rayleigh fading sequence corresponding to a vehicle moving at 80 km/h.


Figure IV.7. Amplitude plot of a Rayleigh fading sequence with Doppler frequency fD = 66.67 Hz and sampling frequency fs = 1 MHz.

4. partG.m This program solves part G. It generates a plot of the filter MSE as a function of the step-size. The Doppler frequency is fixed at 10 Hz for both rays, and the MSE values are obtained by averaging over 50 experiments; see Fig. 11.
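Item 2 refers to an approximate first-order auto-regressive model for the fading rays. A minimal sketch of one common choice is given below; the coefficient α = J₀(2π fD Ts) and unit ray power are assumptions here, not necessarily the project's exact parameterization.

    % Sketch: AR(1) approximation of a Rayleigh fading ray.
    fD = 66.67;  Ts = 1e-3;  N = 500;         % Doppler, sampling period, samples
    alpha = besselj(0, 2*pi*fD*Ts);           % AR(1) coefficient (assumed model)
    q     = 1 - alpha^2;                      % driving-noise variance for unit ray power
    x = zeros(N,1);
    x(1) = (randn + 1j*randn)/sqrt(2);        % complex Gaussian initial state
    for n = 1:N-1
        x(n+1) = alpha*x(n) + sqrt(q/2)*(randn + 1j*randn);  % AR(1) recursion
    end
    plot(20*log10(abs(x)));                   % magnitude in dB, as in Fig. 5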



Figure IV.8. The leftmost plot shows a typical trajectory of the amplitude of the first Rayleigh fading ray and its estimate (both trajectories are in almost perfect agreement). The rightmost plot zooms onto the interval 1000 ≤ i ≤ 1100.


Figure IV.9. The first 1000 samples of a learning curve that results from
tracking a two-ray Rayleigh fading channel using LMS.



Figure IV.10. A plot of the tracking MSE of the LMS filter as a function of
the Doppler frequency of the Rayleigh fading rays.


Figure IV.11. A plot of the tracking MSE as a function of the step-size for
Rayleigh fading rays with Doppler frequency at 10Hz.

PART V: TRANSIENT PERFORMANCE

COMPUTER PROJECT
Project V.1 (Transient behavior of LMS) The programs that solve this project are the following.
1. partA.m This program solves parts A, B, and C. It generates data {u_i, d(i)} with the desired specifications. Figure 1 shows a typical ensemble-average learning curve for the weight-error vector, when the step-size is fixed at μ = 0.01. The ensemble-average curve is superimposed onto the theoretical curve. It is seen that there is a good match between theory and practice. (A minimal sketch of generating such an ensemble-average curve appears after this list.) Figure 2 plots the MSD as a function of the step-size. Observe how the filter starts to diverge for step-sizes close to 0.1. The rightmost plot in Fig. 2 shows the behavior of the function f(μ) over the same range of step-sizes. The plot shows that f(μ) exceeds one around μ ≈ 0.092. We thus find that the simulation results are consistent with theory.

Figure V.1. Theoretical and simulated learning curves for the weight-error vector of LMS operating with a step-size μ = 0.01 and using Gaussian regressors.


Figure V.2. Theoretical and simulated MSD of LMS as a function of the step-size for Gaussian regressors (leftmost plot). The rightmost plot shows the behavior of the function f(μ) over the same range of step-sizes.

2. partD.m This program generates two plots. Figure 3 shows a typical ensemble-average learning curve for the weight-error vector, when the step-size is fixed at μ = 0.05. The ensemble-average curve is superimposed onto the theoretical curve that follows from Prob. V.7. It is seen that there is also a good match between theory and practice for non-Gaussian regressors.




Figure V.3. Theoretical and simulated learning curves for the weight-error vector of LMS operating with a step-size μ = 0.05 and using non-Gaussian regressors.

Figure 4 plots the MSD as a function of the step-size. The theoretical values are obtained by using the following expressions:

MSD = μ² σ_v² r₀ᵀ (I − F)⁻¹ q,    EMSE = μ² σ_v² r₀ᵀ (I − F)⁻¹ r

where {r, r₀, q, F} are defined in the statement of the theorem. Observe how the filter starts to diverge for step-sizes close to 0.1. Actually, using the estimated values for A and B that are generated by the program, it is found that

1/λ_max(A⁻¹B) ≈ 0.103,    1/max{λ(H) ∈ ℝ₊} ≈ 0.204

so that we again find that the simulation results are consistent with theory.
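A minimal sketch of how an ensemble-average learning curve for the weight-error vector, as in Figs. 1 and 3, can be generated; the filter length, noise variance, and number of experiments are illustrative assumptions.

    % Sketch: ensemble-average learning curve E||w^o - w_i||^2 for LMS.
    mu = 0.01;  M = 10;  N = 5000;  L = 100;   % step-size, taps, iterations, experiments
    wo  = randn(M,1); wo = wo/norm(wo);        % hypothetical true weight vector
    msd = zeros(N,1);
    for l = 1:L
        w = zeros(M,1);
        for i = 1:N
            u = randn(1,M);                    % Gaussian regressor
            d = u*wo + sqrt(0.001)*randn;      % measurement (placeholder noise variance)
            w = w + mu*u'*(d - u*w);           % LMS update
            msd(i) = msd(i) + norm(wo - w)^2;  % accumulate squared weight error
        end
    end
    plot(10*log10(msd/L));                     % MSD learning curve in dB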


Figure V.4. Theoretical and simulated MSD of LMS as a function of the step-size for non-Gaussian regressors.

PART VI: BLOCK ADAPTIVE FILTERS

COMPUTER PROJECT
Project VI.1 (Acoustic echo cancellation) The programs that solve this project are the following.
1. partA.m This program generates two plots showing the impulse response sequence of the room as well
as its frequency response up to 4 kHz, as shown in Fig. 1.

Figure VI.1. Impulse and frequency responses of a room model.

2. partB.m This program generates two plots. One plot shows the loudspeaker signal, the echo signal, and the error signals that are left after adaptation by NLMS using μ = 0.5 and μ = 0.1. The second plot shows the variation of the relative filter mismatch as a function of the step-size. Typical outputs are shown in Figs. 2 and 3. It is seen from Fig. 2 that the error signal is reduced faster for μ = 0.5 than for μ = 0.1.
3. partC.m This program generates a plot showing the loudspeaker signal, the echo signal, and the error signals that are left after adaptation by NLMS using μ = 0.1 and NLMS using power normalization with μ = 0.1/32 (see Fig. 4). The mismatch by the former algorithm is found to be −10.49 dB while the mismatch by the latter algorithm is found to be −9.72 dB. The performance of both algorithms is therefore similar although, of course, NLMS with power normalization is less complex computationally. Comparing the convergence of the error signal when μ = 0.1 in the DFT-block adaptive implementation with that in Fig. 2, we see that the block implementation results in faster convergence.
4. partD.m This program generates a plot showing the loudspeaker signal, the echo signal, and the error signal that is left after adaptation using NLMS with power normalization and μ = 0.03/32; see Fig. 5.
5. partE.m This program generates learning curves for ε-NLMS and the constrained DFT-block adaptive implementation of Alg. 28.7 by averaging over 50 experiments and smoothing the resulting curves using a sliding window of width 10; see Fig. 6, which shows the superior convergence performance of the block adaptive filter. (A minimal sketch of the mismatch measure and the smoothing step appears after this list.)
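A minimal sketch of the two measurements referred to above, assuming the relative filter mismatch is defined as ‖w° − w‖²/‖w°‖² in dB; the signals and estimates below are placeholders.

    % Sketch: relative filter mismatch (assumed definition) and the width-10
    % sliding-window smoothing of an ensemble-average learning curve (partE.m).
    wo = randn(1024,1);                           % placeholder room impulse response
    w  = wo + 0.01*randn(1024,1);                 % placeholder adaptive-filter estimate
    mismatch_dB = 10*log10(norm(wo - w)^2/norm(wo)^2);
    mse = abs(randn(1000,1)).^2;                  % placeholder ensemble-average error curve
    smoothed = filter(ones(10,1)/10, 1, mse);     % sliding window of width 10
    plot(10*log10(smoothed));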



Figure VI.2. The top row shows the loudspeaker signal, while the second row
shows the corresponding echo and the last two rows show the resulting error
signal for different step-sizes in the NLMS implementation.

Figure VI.3. Relative filter mismatch for NLMS as a function of the step-size; the values are computed after training with 20 CSS blocks.



Figure VI.4. Performance of NLMS-based DFT-block adaptive implementations. The top row shows the loudspeaker signal, while the second row shows
the corresponding echo and the last two rows show the error signals.


Figure VI.5. Performance of a DFT-block adaptive implementation applied


to a real speech signal. The top row shows the loudspeaker signal, while the
middle row shows the corresponding echo and the last row shows the error
signal left after cancellation.



Figure VI.6. Learning curves for ε-NLMS and the constrained DFT-block adaptive filters using an AR(1) loudspeaker signal.

PART VII: LEAST-SQUARES METHODS

COMPUTER PROJECTS
Project VII.1 (OFDM receiver) The programs that solve this project are the following.
1. partA.m This program generates a plot showing the estimated channel taps both in the time domain and in the frequency domain for the two training schemes of Fig. 10.7. A typical output is shown in Figs. 1 and 2.

Figure VII.1. Magnitude of the channel taps in the time domain: exact
taps (top plot), estimated taps using 64 tones in one symbol (middle plot) and
estimated taps using 64 tones over 8 symbols (bottom plot).
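A minimal sketch of the estimation step for the first training scheme, where all N = 64 tones of one OFDM symbol carry known training data; noise is omitted and the names are illustrative:

    N = 64; M = 8;
    X = sign(randn(N, 1)) + 1j*sign(randn(N, 1));  % known training tones
    h = (randn(M, 1) + 1j*randn(M, 1))/sqrt(2);    % channel taps (simulation only)
    x = ifft(X);                                   % time-domain training symbol
    y = ifft(fft(x) .* fft(h, N));                 % cyclic prefix makes the channel circular
    Y = fft(y);                                    % received tones
    Hhat = Y ./ X;                                 % per-tone least-squares estimate
    hhat = ifft(Hhat); hhat = hhat(1:M);           % recovered time-domain taps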

2. partB.m This program generates a plot of the BER as a function of the SNR for both training schemes of Fig. 1 by averaging over 100 random channels. A typical plot is shown in Fig. 3. The program also plots the scatter diagrams of the transmitted, received, and equalized data at SNR = 25 dB; a typical plot is shown in Fig. 4, and a sketch of the equalization and BER steps appears below.
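Continuing the previous sketch, the per-tone equalization and BER count at a given SNR have the following flavor (QPSK data with one bit carried on each of the real and imaginary parts; the noise scaling and bit map are illustrative):

    SNR = 25;
    S = sign(randn(N, 1)) + 1j*sign(randn(N, 1));  % payload tones
    Y = S .* fft(h, N) + 10^(-SNR/20)*(randn(N, 1) + 1j*randn(N, 1))/sqrt(2);
    Shat = Y ./ Hhat;                              % one-tap zero-forcing equalizer
    bhat = [real(Shat), imag(Shat)] < 0;           % slice to bits
    b = [real(S), imag(S)] < 0;                    % transmitted bits
    BER = mean(bhat(:) ~= b(:));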

Figure VII.2. Magnitude of the channel tones in the frequency domain: exact channel tones (top plot), estimated tones using 64 tones in one symbol (middle plot), and estimated tones using 64 tones over 8 symbols (bottom plot).

Figure VII.3. BER versus SNR for the two training schemes of Fig. 10.7 using N = 64 (symbol size), P = 16 (cyclic prefix length), and M = 8 (channel length).

Figure VII.4. Scatter diagrams of the transmitted and received sequences (top row) and the recovered data using the exact channel and training scheme (a) from Fig. 10.7 (bottom row) at SNR = 25 dB.


Project VII.2 (Tracking Rayleigh fading channels) The programs that solve this project are the following.
1. partA.m This program generates two plots. One plot shows the ensemble-average learning curves for ε-NLMS, RLS, and extended RLS (see Fig. 5). With the values chosen for fD and Ts, the values of α and q turn out to be α = 0.99999999936835 and q = 1.263309457044670 × 10⁻⁹. That is, α is essentially equal to one and q is negligible, so that the time-variation in w_i^o is slow. For this reason, there is practically no difference in performance between RLS and its extended version. In addition, in this situation, although RLS and extended RLS converge faster than ε-NLMS, the steady-state tracking performance of all algorithms is similar. Moreover, recall from Table 21.2 that the steady-state MSE performance of ε-NLMS and RLS can be approximated by

    MSE_LMS = (μ σ_v² / 2) Tr(R_u) E(1/‖u_i‖²) + (1/(2μ)) Tr(R_u) Tr(Q)

    MSE_RLS = [σ_v² (1−λ) M + (1−λ)⁻¹ Tr(Q R_u)] / [2 − (1−λ) M]        (1)

where, in our case, Q = qI, R_u = I, M = 5, σ_v² = 0.001, and λ = 0.995. These expressions result in the theoretical values MSE_LMS ≈ −28.95 dB and MSE_RLS ≈ −29.77 dB, where the expectation E(1/‖u_i‖²) is evaluated numerically via ensemble-averaging. The values obtained from simulation (by averaging the last 100 values of the ensemble-average error curves) are in good match with theory, namely, MSE_LMS ≈ −29.10 dB and MSE_RLS ≈ −28.57 dB. In Fig. 6, we also plot ensemble-average curves for the mean-square deviation, E‖w̃_i‖². It is again seen that the steady-state performances are similar and they tend to approximately −70 dB. A sketch that reproduces the model parameters and evaluates (1) follows Fig. 5.
Figure VII.5. Ensemble-average learning curves for NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 10 Hz and Ts = 0.8 μs.
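As an aside, the quoted values of α and q are consistent with the lag-one Jakes autocorrelation, α = J0(2π fD Ts) and q = 1 − α², where J0 is the zeroth-order Bessel function of the first kind; this relation reproduces all the digits reported above. A minimal MATLAB sketch that recomputes them and evaluates (1) is given below; the step-size mu and the estimate Einv of E(1/‖u_i‖²) are placeholders, and the final conversion to dB adds the noise floor σ_v², which appears consistent with the reported figures:

    fD = 10; Ts = 0.8e-6;                 % Doppler frequency (Hz) and sampling period (s)
    alpha = besselj(0, 2*pi*fD*Ts);       % = 0.99999999936835, as quoted above
    q = 1 - alpha^2;                      % = 1.2633e-9, as quoted above
    M = 5; sv2 = 0.001; lam = 0.995;
    Ru = eye(M); Q = q*eye(M);
    mu = 0.5; Einv = 1/3;                 % placeholders; the programs estimate Einv
                                          % by ensemble averaging
    emseLMS = (mu*sv2/2)*trace(Ru)*Einv + (1/(2*mu))*trace(Ru)*trace(Q);
    emseRLS = (sv2*(1-lam)*M + trace(Q*Ru)/(1-lam)) / (2 - (1-lam)*M);
    mseLMSdB = 10*log10(sv2 + emseLMS)    % compare with the theoretical values above
    mseRLSdB = 10*log10(sv2 + emseRLS)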

2. partB.m This program generates two plots for the case fD = 80 Hz. One plot shows the ensemble-average learning curves for ε-NLMS, RLS, and extended RLS (see Fig. 7). With the values chosen for fD and Ts, the value of α is still close to one and the value of q is still small, namely, α = 0.99999995957410 and q = 8.085179670214160 × 10⁻⁸. For this reason, there is still practically no difference in performance between RLS and its extended version. Now, however, it is observed that the steady-state performance of ε-NLMS is superior:

    MSE_LMS ≈ −28.34 dB,    MSE_RLS ≈ −16.73 dB    (simulation)

In Fig. 8, we also plot ensemble-average curves for the mean-square deviation, E‖w̃_i‖².

3. partC.m This program generates two plots. One plot shows the ensemble-average learning and mean-square deviation curves for ε-NLMS, RLS, and extended RLS for the case α = 0.95 and q = 0.1 (see Fig. 9), while the other shows the same curves for the case α = 1 and q = 0.1 (see Fig. 10). It is seen from these figures that the performance of extended RLS is now superior. A sketch of the recursion assumed for extended RLS appears after Fig. 10.



Figure VII.6. Ensemble-average mean-square deviation curves for NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 10 Hz and Ts = 0.8 μs.

Figure VII.7. Ensemble-average learning curves for NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 80 Hz and Ts = 0.8 μs.



Figure VII.8. Ensemble-average mean-square deviation curves for NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 80 Hz and Ts = 0.8 μs.
Figure VII.9. Ensemble-average learning (left) and mean-square deviation (right) curves for NLMS, RLS, and extended RLS when used in tracking a multipath channel with α = 0.95 and q = 0.1.

Figure VII.10. Ensemble-average learning (left) and mean-square deviation (right) curves for NLMS, RLS, and extended RLS when used in tracking a multipath channel with α = 1 and q = 0.1.
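For reference, the extended RLS recursion for the model w_i^o = α w_{i−1}^o + n_i (with E n_i n_i* = qI) takes the form of a Kalman filter; the following is a minimal sketch under that assumption, with placeholder data and illustrative variable names. Setting α = 1 and q = 0 removes the time update, so that the recursion reduces to a growing-memory RLS filter:

    M = 5; Niter = 1000; sv2 = 0.001; alpha = 0.95; q = 0.1;
    U = randn(Niter, M); d = randn(Niter, 1);   % placeholders for the actual data
    w = zeros(M, 1); P = eye(M);
    for i = 1:Niter
        u = U(i, :);                    % 1 x M regressor
        g = P*u' / (sv2 + u*P*u');      % gain vector
        e = d(i) - u*w;                 % a priori estimation error
        w = alpha*(w + g*e);            % measurement update followed by time update
        P = alpha^2*(P - g*u*P) + q*eye(M);
    end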

PART VIII: ARRAY ALGORITHMS

COMPUTER PROJECT
Project VIII.1 (Performance of array implementations in finite precision) The program that solves this project is the following.
1. partA.m The program generates ensemble-average learning curves for RLS and its QR array variant using three choices for the number of bits, {30, 25, 16}, and averaging over 100 experiments. A typical output is shown in Fig. 1. It is seen that the QR array method is superior in finite-precision implementations. For instance, at 16 bits, RLS fails while the QR method still functions properly. A sketch of a B-bit rounding helper appears below.
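As a reference for how the finite-precision environment can be emulated, the following is a minimal sketch of a B-bit rounding helper (a fractional fixed-point format with one sign bit; saturation handling is omitted, and the helper is illustrative rather than the actual routine used by partA.m):

    % Round x to B bits: 1 sign bit and B-1 fractional bits.
    quantize = @(x, B) round(x * 2^(B-1)) / 2^(B-1);
    xq = quantize(0.1234567, 16);   % e.g., a sample quantized to 16 bits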
Figure VIII.1. Ensemble-average learning curves for RLS and its QR array variant for three choices of the number of quantization bits.

PART IX: FAST RLS ALGORITHMS

COMPUTER PROJECT
Project IX.1 (Stability issues in fast least-squares) The programs that solve this project are the following.
1. partA.m This program generates ensemble-average learning curves for four fast fixed-order implementations. A typical output is shown in Fig. 1. It is seen that in floating-point precision, the algorithms
behave rather similarly.
Figure IX.1. Ensemble-average learning curves for four fast fixed-order filter implementations in floating-point precision.

2. partB.m This program generates ensemble-average learning curves for four fast fixed-order filters using quantized implementations with 36 bits for both signals and coefficients. A typical output is shown in Fig. 2. It is seen that, even at this number of bits, the standard FTF implementation becomes unstable. When implemented with the rescue mechanism (39.10), the rescue procedure is called upon on several occasions but it still fails to recover the filter performance. The stabilized FTF implementation also fails in this experiment. Only the fast array implementation is able to maintain the original filter performance in this case; a sketch of a typical rescue test appears below.
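The rescue logic has the following flavor: certain internal FTF variables (such as the conversion factor) are confined to (0, 1] in exact arithmetic, so an out-of-range value flags divergence and triggers a soft restart. A minimal sketch in the spirit of (39.10), with illustrative variable names rather than the actual ones:

    function [w, gam, zf, zb] = ftf_rescue(w, gam, zf, zb, delta)
    % Soft restart when round-off drives the conversion factor gam out of
    % its theoretical range; zf and zb are the forward/backward energies.
    if gam <= 0 || gam > 1
        w = zeros(size(w)); gam = 1;    % reinitialize the filter
        zf = delta; zb = delta;         % reset the energies to their initial values
    end
    end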


Figure IX.2. Ensemble-average learning curves for four fast fixed-order filter implementations averaged over 100 experiments in a quantized environment with 36 bits.

PART X: LATTICE FILTERS


COMPUTER PROJECT
Project X.1 (Performance of lattice filters in finite precision) The programs that solve this project
are the following.
1. partA.m This program generates ensemble-average learning curves for the various lattice filters for different choices of the number of bits. The results are shown in Figs. 1 through 5. All filters work well at 35 bits, but some filters start facing difficulties at smaller numbers of bits. It appears from the figures that the array lattice form is the most reliable in finite precision, while the a posteriori lattice form with error feedback is the least reliable.
Figure X.1. Ensemble-average learning curves for various lattice implementations using 35 bits.

2. partB.m This program generates ensemble-average learning curves for the various lattice filters in the presence of an impulsive interference at iteration i = 200 and assuming floating-point arithmetic. The result is shown in Fig. 6. It is seen that all algorithms recover from the effect of the impulsive disturbance.
3. partC.m This program generates ensemble-average learning curves for the various lattice filters in the presence of an impulsive interference at iteration i = 200 and assuming finite-precision arithmetic with B = 20 and B = 10 bits. The results are shown in Figs. 7 and 8; a sketch of how the disturbance is injected appears below.
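The disturbance itself can be injected by perturbing the desired sequence at the chosen time instant; a minimal sketch (the amplitude 100 is illustrative):

    Niter = 1000;
    d = randn(Niter, 1);            % desired sequence of one experiment
    d(200) = d(200) + 100;          % single large outlier at iteration i = 200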


Figure X.2. Ensemble-average learning curves for various lattice implementations using 25 bits.

Figure X.3. Ensemble-average learning curves for various lattice implementations using 20 bits.


Figure X.4. Ensemble-average learning curves for various lattice implementations using 16 bits.

Figure X.5. Ensemble-average learning curves for various lattice implementations using 10 bits.


Figure X.6. Ensemble-average learning curves for various lattice implementations in floating-point precision with an impulsive disturbance occurring at iteration i = 200.

Figure X.7. Ensemble-average learning curves for various lattice implementations in 20-bit precision with an impulsive disturbance occurring at iteration i = 200.


Figure X.8. Ensemble-average learning curves for various lattice implementations in 10-bit precision with an impulsive disturbance occurring at iteration i = 200.

PART XI: ROBUST FILTERS


COMPUTER PROJECT
Project XI.1 (Active noise control) The programs that solve this project are the following.
1. partA.m This program generates a plot of the function g(μ), which is shown in Fig. 1. The plot on the right zooms in on the interval [0, 0.1]. It is seen that g(μ) < 1 for μ < 0.062. Also, g(μ) is minimized at μ = 0.031. Note, however, that for all values of μ in the range (0, 0.062), the function g(μ) is essentially invariant and assumes values very close to one.
Figure XI.1. A plot of the function g(μ) (left) and a zoom on the interval [0, 0.1] (right).

2. partB.m This program generates ensemble-average mean-square deviation curves (i.e., E‖w̃_i‖²) for several values of μ using binary input data. A typical output is shown in Fig. 2.
Figure XI.2. Ensemble-average mean-square deviation curves of the filtered-error LMS algorithm for various choices of μ (i.e., different step-sizes) and for binary input data.

3. partC.m This program generates ensemble-average learning curves for the filtered-error algorithm for several values of μ and using both binary input and white Gaussian input. Typical outputs are shown in Figs. 3 and 4. It is observed that the algorithm shows unstable behavior for values of μ beyond 0.5.
4. partD.m This program generates ensemble-average learning curves for four algorithms with μ = 0.15. A typical output is shown in Fig. 5. It is seen that NLMS shows the best convergence performance, followed by filtered-error LMS, modified filtered-x LMS, and filtered-x LMS. It is also seen that the modification to FxLMS improves convergence. A sketch of the FxLMS recursion appears after this list.
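For reference, the following is a minimal sketch of the FxLMS recursion compared in part D. The filter length and the primary and secondary paths below are illustrative; in practice the error e(i) is measured at the error microphone rather than synthesized, and the reference would be filtered through an estimate of the secondary path:

    Niter = 5000; M = 16; mu = 0.15;       % M and the paths are illustrative
    x = randn(Niter, 1);                   % reference (loudspeaker) signal
    s = [0.5; 0.4; 0.1];                   % secondary path
    d = filter([0.8; 0.6; -0.2], 1, x);    % primary-path disturbance
    xf = filter(s, 1, x);                  % reference filtered through the secondary path
    Ls = length(s);
    w = zeros(M, 1); y = zeros(Niter, 1); e = zeros(Niter, 1);
    for i = M:Niter
        u = x(i:-1:i-M+1);                 % regressor
        y(i) = w'*u;                       % anti-noise output
        e(i) = d(i) - s'*y(i:-1:i-Ls+1);   % residual after the secondary path
        uf = xf(i:-1:i-M+1);               % filtered regressor
        w = w + mu*e(i)*uf;                % FxLMS update
    end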


Figure XI.3. Ensemble-average learning curves of filtered-error LMS for various choices of μ and for white binary input.

Figure XI.4. Ensemble-average learning curves of filtered-error LMS for various choices of μ and for white Gaussian input.


Figure XI.5. Ensemble-average learning curves of NLMS, FeLMS, FxLMS, and mFxLMS for μ = 0.15 and white Gaussian input.
