MATLAB simulation exercises in statistical signal processing

September 22, 2021



[1]
[a] For p = 2, 3, 4, simulate an AR(p) process using the algorithm

x[n] = −a[1]x[n − 1] − ... − a[p]x[n − p] + w[n]

where {w[n]} is an iid N(0, σ_w²) sequence with σ_w = 0.1 and the AR coefficients a[k], k = 1, 2, ..., p, are chosen so that the roots of the polynomial

A(z) = 1 + a[1]z^{−1} + ... + a[p]z^{−p}

fall inside the unit circle. Use the "poly" command to generate the AR polynomial for a specified set of zeros and use the "roots" command to calculate the roots of a polynomial for a specified set of coefficients.
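A minimal sketch of part [a] in Python/NumPy (the exercise itself asks for MATLAB, whose poly and roots commands correspond to np.poly and np.roots); the root locations below are illustrative choices, not part of the problem statement:

```python
import numpy as np

def simulate_ar(roots, sigma_w, N, rng=None):
    """Simulate x[n] = -a[1]x[n-1] - ... - a[p]x[n-p] + w[n], with x[n] = 0 for n < 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    a = np.poly(roots)                       # A(z) coefficients, a[0] = 1 (cf. MATLAB poly)
    assert np.all(np.abs(np.roots(a)) < 1), "roots must lie inside the unit circle"
    p = len(a) - 1
    w = rng.normal(0.0, sigma_w, N)          # iid N(0, sigma_w^2) driving noise
    x = np.zeros(N)
    for n in range(N):
        x[n] = w[n] - sum(a[k] * x[n - k] for k in range(1, p + 1) if n - k >= 0)
    return x, a

# Example: p = 3 with (hypothetical) stable roots
x, a = simulate_ar(roots=[0.5, -0.3, 0.7], sigma_w=0.1, N=100)
```

Repeating the call with different root sets gives the p = 2, 3, 4 cases asked for.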

[b] Write a program to calculate the autocorrelation estimates

R̂xx(τ) = (1/(N − τ)) Σ_{n=0}^{N−τ−1} x[n + τ]x[n]

for τ = 0, 1, ..., M and R̂xx(−τ) = R̂xx(τ), τ = 1, 2, ..., M, where N = 100 is the number of generated samples of the process and M << N, say M = 20.
Apply the Levinson-Durbin algorithm to calculate the prediction coefficients of the process up to order p. Draw the lattice filter realization and compare it with the lattice filter obtained using the exact statistics of the process, i.e., compare the estimated reflection coefficients and prediction error energies of orders up to p obtained using R̂xx(τ) with those obtained using the true autocorrelation Rxx(τ). Try to design a perturbation-theoretic method for calculating the approximate shift in the reflection coefficients and prediction energies when estimated correlations are used in place of true correlations. Your answer must be expressed in terms of the shift

δRxx(τ) = R̂xx(τ) − Rxx(τ)
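The estimator and the Levinson-Durbin recursion of part [b] can be sketched in Python/NumPy as follows (a sketch of the standard recursion, using the sign convention k_m = −(Σ_i a_{m−1}[i] R(m − i))/E_{m−1}; the perturbation analysis itself is left to the exercise):

```python
import numpy as np

def autocorr_est(x, M):
    """R_hat(tau) = (1/(N - tau)) * sum_{n=0}^{N-tau-1} x[n+tau] x[n], tau = 0..M."""
    N = len(x)
    return np.array([np.dot(x[tau:], x[:N - tau]) / (N - tau) for tau in range(M + 1)])

def levinson_durbin(R, p):
    """Levinson-Durbin recursion on R(0..p): returns the prediction coefficients
    a = [1, a[1], ..., a[p]], the reflection coefficients k_1..k_p, and the
    prediction-error energies E_1..E_p of orders 1..p."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    E = R[0]                                  # order-0 prediction error energy
    refl, energies = np.zeros(p), np.zeros(p)
    for m in range(1, p + 1):
        k = -np.dot(a[:m], R[m:0:-1]) / E     # reflection coefficient k_m
        a_prev = a.copy()
        for i in range(1, m):                 # a_m[i] = a_{m-1}[i] + k_m a_{m-1}[m-i]
            a[i] = a_prev[i] + k * a_prev[m - i]
        a[m] = k
        E = E * (1.0 - k * k)                 # energy update E_m = E_{m-1}(1 - k_m^2)
        refl[m - 1], energies[m - 1] = k, E
    return a, refl, energies
```

Running levinson_durbin on the estimated R̂xx(τ) and on the exact Rxx(τ), then differencing the outputs, gives the empirical shifts that the perturbation analysis should predict to first order in δRxx(τ).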

[2] Generate an ARMA(p, q) process with p = 3, q = 5 according to the algorithm

x[n] + Σ_{k=1}^{p} a[k]x[n − k] = Σ_{k=0}^{q} b[k]δ[n − k]

where the roots of the polynomial

A(z) = 1 + Σ_{k=1}^{p} a[k]z^{−k}

all fall inside the unit circle. Here, δ[n] is the unit impulse function. Estimate {a[k]} by minimizing

E({a[k]}_{k=1}^{p}) = Σ_{n=max(p,q+1)}^{N} (x[n] + Σ_{k=1}^{p} a[k]x[n − k])²

and then estimate {b[k]} using these estimated AR coefficients in the equation

b[n] = x[n] + Σ_{k=1}^{p} a[k]x[n − k], n = 0, 1, ..., q
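A Python/NumPy sketch of problem [2]; the coefficient values are hypothetical (any A(z) with roots inside the unit circle will do). Because the data here are generated exactly by the model, the least-squares fit recovers the true coefficients:

```python
import numpy as np

a = np.poly([0.4, -0.5, 0.6])                      # p = 3, stable (hypothetical roots)
b = np.array([1.0, 0.8, -0.3, 0.2, 0.1, -0.05])    # q = 5 (hypothetical values)
p, q, N = len(a) - 1, len(b) - 1, 60

# Generate x[n] + sum_k a[k] x[n-k] = sum_k b[k] delta[n-k]
x = np.zeros(N)
for n in range(N):
    x[n] = (b[n] if n <= q else 0.0) \
           - sum(a[k] * x[n - k] for k in range(1, p + 1) if n - k >= 0)

# Estimate {a[k]} by minimizing sum_{n >= max(p, q+1)} (x[n] + sum_k a[k] x[n-k])^2
n0 = max(p, q + 1)
X = np.column_stack([x[n0 - k:N - k] for k in range(1, p + 1)])  # columns x[n-1..n-p]
a_hat = np.linalg.lstsq(X, -x[n0:N], rcond=None)[0]

# Recover {b[k]} from b[n] = x[n] + sum_k a_hat[k] x[n-k], n = 0..q
b_hat = np.array([x[n] + sum(a_hat[k - 1] * x[n - k]
                             for k in range(1, p + 1) if n - k >= 0)
                  for n in range(q + 1)])
```

With noisy data instead of an exact impulse response, the same least-squares step gives the usual covariance-method AR estimates rather than an exact recovery.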

[3] Generate an AR(p) process {x[n]} as in problem [1]. Generate the desired process

d[n] = c[1]x[n] + c[2]x[n − 1] + ... + c[q]x[n − q] + v[n]

where {v[n]} is iid N(0, σ_v²), independent of {w[n]}. Take, for example, p = 3, q = 2.
[a] Implement the LMS algorithm for estimating d[n] linearly based on x[n], x[n − 1], ..., x[n − r], i.e., write

d̂[n] = Σ_{k=0}^{r} h_k[n]x[n − k], e[n] = d[n] − d̂[n],

h_k[n + 1] = h_k[n] − μ (∂/∂h_k[n]) e[n]² = h_k[n] + 2μe[n]x[n − k], k = 0, 1, ..., r
Plot the error process e[n] and estimate its limiting mean square value using time averages. Now, assuming that the vectors

X[n] = [x[n], x[n − 1], ..., x[n − r]]^T, n = r, r + 1, ...

are independent, carry out the standard convergence analysis of the LMS algorithm and determine the limiting mean square error of e[n]. Compare this with the estimated value.
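A Python/NumPy sketch of part [a] with illustrative values p = 3, q = 2, r = 4 (the AR roots, the c[k], the step size μ, and the noise levels are all hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.poly([0.5, -0.4, 0.3])                 # AR(3) polynomial (hypothetical roots)
c = np.array([1.0, -0.6, 0.25])               # weights c[1..3], so q = 2
N, sigma_w, sigma_v = 8000, 0.1, 0.05

# AR(3) input x[n] and desired process d[n] = sum_j c[j+1] x[n-j] + v[n]
x, d = np.zeros(N), np.zeros(N)
for n in range(N):
    x[n] = rng.normal(0, sigma_w) \
           - sum(a[k] * x[n - k] for k in range(1, 4) if n - k >= 0)
    d[n] = sum(c[j] * x[n - j] for j in range(3) if n - j >= 0) \
           + rng.normal(0, sigma_v)

# LMS: h_k[n+1] = h_k[n] + 2 mu e[n] x[n-k]
r, mu = 4, 0.02
h = np.zeros(r + 1)
e = np.zeros(N)
for n in range(r, N):
    xn = x[n - np.arange(r + 1)]              # [x[n], x[n-1], ..., x[n-r]]
    e[n] = d[n] - h @ xn
    h = h + 2 * mu * e[n] * xn

mse_est = np.mean(e[N // 2:] ** 2)            # time-average estimate of the limiting MSE
```

Under the independence assumption, the analytical limiting MSE is σ_v² plus a misadjustment term proportional to μ, which is what mse_est should approach for small μ.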

[b] Cast the AR filter model and the measurement model for d[n] in standard state-variable form and design a Kalman filter for estimating the state ξ[n] = [x[n − 1], ..., x[n − p]]^T from the measured data d[k], k ≤ n. Also design the causal Wiener filter and compare the asymptotics of the two.
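One convenient state-variable form for part [b] (with p = 3, q = 2) takes ξ[n] = [x[n], x[n − 1], x[n − 2]]^T with a companion-matrix transition and the measurement d[n] = c^T ξ[n] + v[n]; the exercise's own indexing of the state may differ by a shift. A sketch of one predict/update cycle of the standard Kalman recursion, with hypothetical model values:

```python
import numpy as np

a = np.poly([0.5, -0.4, 0.3])            # hypothetical stable AR(3) polynomial
F = np.array([[-a[1], -a[2], -a[3]],     # companion form of x[n] = -a[1]x[n-1] - ...
              [1.0,   0.0,   0.0],
              [0.0,   1.0,   0.0]])
g = np.array([1.0, 0.0, 0.0])            # w[n] drives the first state component
c = np.array([1.0, -0.6, 0.25])          # measurement d[n] = c^T xi[n] + v[n]
Qw, Rv = 0.01, 0.0025                    # sigma_w^2 and sigma_v^2 (hypothetical)

def kalman_step(xi_hat, P, d_n):
    """One predict/update cycle of the Kalman filter for the model above."""
    xi_pred = F @ xi_hat                          # time update
    P_pred = F @ P @ F.T + Qw * np.outer(g, g)
    S = c @ P_pred @ c + Rv                       # innovation variance
    K = P_pred @ c / S                            # Kalman gain
    xi_new = xi_pred + K * (d_n - c @ xi_pred)    # measurement update
    P_new = P_pred - np.outer(K, c @ P_pred)
    return xi_new, P_new
```

Iterating the covariance update drives P to the fixed point of the Riccati equation; the corresponding steady-state gain is what the causal Wiener filter's asymptotics should match.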

[4] Generate reference signals

x1[n] = cos(ωn), x2[n] = sin(ωn)

Generate a signal y[n] = A cos(ωn + φ) + B cos(θn + ψ) + v[n] with θ ≠ ω and v[n] white Gaussian noise. Design an adaptive notch filter based on the LMS algorithm to cancel out the frequency component at ω from y[n] using the reference signals. Specifically, the notch filter is defined by the equations
e[n] = y[n] − ŷ[n], ŷ[n] = h1 [n]x1 [n] + h2 [n]x2 [n],

h_k[n + 1] = h_k[n] − μ (∂/∂h_k[n]) e[n]² = h_k[n] + 2μe[n]x_k[n], k = 1, 2
Plot the output e[n] of this adaptive notch filter. Also plot its FFT and check
that the frequency component at ω has been cancelled out.
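A Python/NumPy sketch of problem [4]; ω, θ, the amplitudes, phases, noise level, and step size below are hypothetical. After convergence, h1[n] and h2[n] settle near A cos φ and −A sin φ, so the ω component is subtracted out while the component at θ passes through to e[n]:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4000
n = np.arange(N)
omega, theta = 0.3 * np.pi, 0.7 * np.pi          # theta != omega
A, B, phi, psi = 1.0, 0.5, 0.4, -0.2

x1, x2 = np.cos(omega * n), np.sin(omega * n)    # reference signals
y = A * np.cos(omega * n + phi) + B * np.cos(theta * n + psi) \
    + rng.normal(0, 0.05, N)

mu = 0.01
h1, h2 = 0.0, 0.0
e = np.zeros(N)
for k in range(N):
    y_hat = h1 * x1[k] + h2 * x2[k]
    e[k] = y[k] - y_hat
    h1 += 2 * mu * e[k] * x1[k]                  # LMS weight updates
    h2 += 2 * mu * e[k] * x2[k]

# Spectrum of the converged tail: the peak at omega should be suppressed
E = np.abs(np.fft.rfft(e[N // 2:]))
freqs = np.fft.rfftfreq(N - N // 2) * 2 * np.pi  # bin frequencies in rad/sample
```

Plotting E against freqs shows the surviving peak at θ and the notch at ω asked for in the problem.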
