
Adaptive Digital Filters

Introduction

• Introduction to the adaptive filtering problem.
• Wiener-Hopf optimal filter.
• LMS update algorithm.
• Implementation of an adaptive filter using the LMS algorithm.
• Application examples.
• Results.

Why use adaptive filters?

• Ordinary FIR and IIR filters are suitable for problems where the useful signal and the noise component occupy two separate frequency bands. A bandstop filter can then be designed to filter out the noise component.
• A problem occurs if the frequency content of the noise varies with time. The designed bandstop filter would then not be as useful.

[Figure: spectra of the useful signal and the noise in two cases – separate bands, where a bandstop filter works, and time-varying noise, where it is less useful]
Why use adaptive filters?

• Or, even worse, the two frequency bands may overlap. The designed bandstop filter would now be completely useless.
• Adaptive filters can help in those situations.

[Figure: spectrum where the noise band overlaps the useful signal's band]

Adaptive Filters

• Adaptive filters differ from other filters, such as FIR and IIR, in the sense that:
– The coefficients are not determined by a set of desired specifications.
– The coefficients are not fixed.
• With adaptive filters the specifications are not known and change with time.
• Applications include: process control, medical instrumentation, speech processing, echo and noise cancellation, and channel equalisation.
Adaptive Filters

• The basic idea in adaptive filtering is that the frequency response of the filter can be altered according to some criterion, to adapt the filter better to the incoming signal.
• To construct an adaptive filter, the following selections have to be made:
– Which method/algorithm to use to update the coefficients of the selected filter.
– Whether to use an FIR or IIR filter.

Problem

[Diagram: the signal d(n) passes through a signal-distorting system u(n) with additive noise w(n), giving x(n); x(n) is filtered by a digital filter h(n), giving y(n); e(n) = d(n) − y(n)]

• Design a digital filter H(z) with impulse response h(n) in order to compensate for the signal distortion by the system U(z) and the noise introduced by the signal w(n).
• The estimated signal, the filter output y(n), should then be as close as possible to the original, information-carrying signal d(n).
• d(n) is therefore also known as the desired signal.
• The estimation error is e(n) = d(n) − y(n).
Solution

• h(n) can be designed in "one go" (Wiener filter) or through an iterative process (adaptive filter).
• We will investigate the first approach initially, but then modify the Wiener solution in order to obtain an adaptive filter.

Adaptive Filter Structure

[Diagram: x(n) enters the digital filter h(n), producing y(n); e(n) = d(n) − y(n) is fed back to the adaptive algorithm, which updates h(n)]

Definitions of signals:
x(n) – input signal
y(n) – filter output
d(n) – desired signal
e(n) – error signal
h(n) – set of adaptive coefficients (changes for every n)
Adaptive Algorithm

• The real challenge in designing an adaptive filter resides with the adaptive algorithm.
• The algorithm needs to have the following properties:
– Practical to implement.
– Adapts the coefficients quickly.
– Provides the desired performance.
• Some well-known adaptive algorithms:
– Least Mean Square (LMS)
– Recursive Least Squares (RLS)
– Kalman filter

Mathematical Analysis

Filter output: $y(n) = \sum_{k=0}^{N-1} h(k)\, x(n-k)$
Error signal: $e(n) = d(n) - y(n)$
Mean-square error (MSE): $J = E[e^2(n)]$

The aim of the adaptive algorithm is to adapt the filter coefficients in order to minimise the MSE.

Using vector notation:
$\mathbf{x}(n) = [\,x(n),\, x(n-1),\, \ldots,\, x(n-N+1)\,]^T, \qquad \mathbf{h} = [\,h(0),\, h(1),\, \ldots,\, h(N-1)\,]^T$
$y(n) = \mathbf{h}^T\mathbf{x}(n) = \mathbf{x}(n)^T\mathbf{h}$
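As a minimal numerical illustration of the vector notation (the values are hypothetical), in Matlab:

% y(n) = h' * x(n): dot product of coefficients and recent input samples
h    = [0.5; 0.3; 0.2];       % N = 3 coefficients h(0), h(1), h(2)
xvec = [1.0; -0.5; 0.25];     % [x(n); x(n-1); x(n-2)]
y    = h' * xvec;             % filter output at time n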

Mathematical Analysis

$E[e^2(n)] = E[(d(n) - y(n))^2]$
$\quad = E[(d(n) - \mathbf{h}^T\mathbf{x}(n))^2]$
$\quad = E[d^2(n) - 2 d(n)\,\mathbf{x}(n)^T\mathbf{h} + \mathbf{h}^T\mathbf{x}(n)\,\mathbf{x}(n)^T\mathbf{h}]$
$\quad = E[d^2(n)] - 2\, E[d(n)\,\mathbf{x}(n)^T]\,\mathbf{h} + \mathbf{h}^T E[\mathbf{x}(n)\,\mathbf{x}(n)^T]\,\mathbf{h}$
$\quad = P_d - 2\,\mathbf{R}_{dx}^T\mathbf{h} + \mathbf{h}^T\mathbf{R}_{xx}\mathbf{h}$

where
$P_d = E[d^2(n)]$ – scalar
$\mathbf{R}_{dx} = E[d(n)\,\mathbf{x}(n)]$ – cross-correlation vector
$\mathbf{R}_{xx} = E[\mathbf{x}(n)\,\mathbf{x}(n)^T]$ – autocorrelation matrix

• To find the minimum error, take the derivative with respect to the coefficients h(k) and set it equal to zero:

$\frac{\partial E[e^2(n)]}{\partial \mathbf{h}} = -2\mathbf{R}_{dx} + 2\mathbf{R}_{xx}\mathbf{h} = 0$

• Solving for h gives the Wiener-Hopf equation:

$\mathbf{h} = \mathbf{R}_{xx}^{-1}\mathbf{R}_{dx}$

• The Wiener-Hopf equation therefore determines the set of optimal filter coefficients in the mean-square sense.
• The filter with these optimal coefficients is often called the "Wiener filter", after the pioneering work of Wiener in the 1940s.
• This result is usually not practical to compute, since the signal statistics are not usually known, and it involves a matrix inversion which is difficult and time-consuming to calculate.
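In practice the statistics can be estimated from recorded data. A minimal Matlab sketch of this route, with hypothetical signals and an assumed filter length:

% Sketch: estimate Rxx and Rdx from data, then solve the Wiener-Hopf equation
N = 8;                                 % assumed filter length
x = randn(1000,1);                     % input signal (hypothetical)
d = filter([1 0.5 -0.3],1,x);          % desired signal (hypothetical)
X = toeplitz(x(N:end), x(N:-1:1));     % rows are [x(n) x(n-1) ... x(n-N+1)]
Rxx = (X'*X)/size(X,1);                % sample autocorrelation matrix
Rdx = (X'*d(N:end))/size(X,1);         % sample cross-correlation vector
h = Rxx \ Rdx;                         % Wiener solution (avoids explicit inverse)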

MSE Surface – Search for the MMSE

MMSE = Minimum Mean Square Error

• E[e²(n)] represents the expected value of the squared filter error e(n), i.e. the mean-square error (MSE).
• For an N-coefficient filter this is a surface over N dimensions, with the Wiener-Hopf solution positioned at the bottom of the surface (i.e. the minimum-error point).
• We can plot it for the case of a 2-coefficient filter (with more coefficients than that it is impossible to draw in a 2D plot).

The LMS Update Algorithm

• The Least Mean Square (LMS) method iteratively estimates the solution to the Wiener-Hopf equation using a method called gradient descent.
• This minimisation method finds a minimum by estimating the gradient.
• The basic equation in gradient descent is:

$h_n(k) = h_{n-1}(k) - \mu\, \nabla_h E[e^2(n)]$

where μ is the step-size parameter, and stepping against the gradient makes h_n(k) approach h_opt.
• The LMS method uses this technique to compute new coefficients that minimise the difference between the computed output and the expected output, under a few key assumptions about the nature of the signal.
Search for the MMSE Using the Gradient

• Choose the initial point (filter coefficients) on the surface and calculate the gradient vector at that point.
• Now follow exactly the opposite direction (this is the steepest-descent adaptation method).
• μ is the size of the step taken during this descent (i.e. the adaptive step size) – not too big (unstable!) but not too small (slow!).

The Steepest Descent Algorithm

$h_n(k) = h_{n-1}(k) - \mu\, \nabla_h E[e^2(n)]$

• Notice that the expression for the gradient has already been obtained in the process of calculating the Wiener filter coefficients, i.e.:

$\nabla_h = \frac{\partial E[e^2(n)]}{\partial \mathbf{h}} = -2\mathbf{R}_{dx} + 2\mathbf{R}_{xx}\mathbf{h}$

• This is therefore of little help in our search for more efficient solutions – the coefficients are now determined iteratively, but we still need to estimate the autocorrelation matrix Rxx and the cross-correlation vector Rdx (and for every iteration step!).
• There is no matrix inversion, though; but more simplification is needed.
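A minimal sketch of the steepest-descent iteration, using assumed (hypothetical) statistics Rxx and Rdx:

% Sketch: steepest descent towards the Wiener solution
Rxx = [1.0 0.5; 0.5 1.0];     % assumed autocorrelation matrix
Rdx = [0.9; 0.4];             % assumed cross-correlation vector
mu = 0.05;                    % step size: too big -> unstable, too small -> slow
h = zeros(2,1);               % initial filter coefficients
for iter = 1:200
    grad = -2*Rdx + 2*Rxx*h;  % exact gradient derived above
    h = h - mu*grad;          % step in the direction opposite to the gradient
end
% h now approximates the Wiener solution Rxx \ Rdx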

The LMS Update Algorithm

• The LMS method replaces the expected value in the gradient with an instantaneous estimate based on the current error.
• Using this assumption, the gradient can be estimated as follows.

Exact gradient:
$\nabla_h E[e^2(n)] = -2\, E[e(n)\, x(n-k)]$

Gradient estimate for LMS (instead of expected values, use the current ones, i.e. drop the E[·]):
$\nabla_h E[e^2(n)] \approx -2\, e(n)\, x(n-k)$

• The equation to update the filter coefficients is therefore (with the factor of 2 absorbed into the step size μ):

$h_n(k) = h_{n-1}(k) + \mu\, e(n)\, x(n-k)$

Applications

• Before looking into the details of the Matlab implementation of the LMS update algorithm, some practical applications of adaptive filters are considered first.
• Those are:
– System Identification
– Inverse System Estimation
– Adaptive Noise Cancellation
– Linear Prediction

Applications: System Identification

[Diagram: x(n) feeds both the unknown system (output d(n)) and the adaptive filter h(n) (output y(n)); e(n) = d(n) − y(n) drives the adaptive algorithm]

Definitions of signals:
x(n) – input applied to the unknown system and the adaptive filter
y(n) – filter output
d(n) – system (desired) output
e(n) – estimation error

• The unknown system is placed in parallel with the adaptive filter; in this case the same input feeds both the adaptive filter and the unknown system.
• The adaptive algorithm drives e(n) towards zero.
• When e(n) is very small, the adaptive filter response is close to the response of the unknown system.
• Using this configuration, adaptive filters can be used to identify an unknown system, such as the response of an unknown communications channel or the frequency response of an auditorium.

[Figure: identifying the response of the unknown system]
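As an illustrative sketch, this configuration can be simulated with the lms function developed later in this lecture; the "unknown" system and the signals below are hypothetical:

% Sketch: system identification with LMS
b = [0.8 0.4 -0.2 0.1];       % "unknown" FIR system (hypothetical)
x = randn(2000,1);            % common input to system and adaptive filter
d = filter(b,1,x);            % unknown system output = desired signal
[y,e,c] = lms(x,d,4,0.01);    % after convergence the filter taps approximate b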

Applications: Inverse Estimation

[Diagram: x(n) passes through the unknown system and then the adaptive filter; the desired signal d(n) is a delayed copy of x(n); e(n) = d(n) − y(n) drives the adaptive algorithm]

Definitions of signals:
x(n) – input applied to the system
y(n) – filter output
d(n) – desired output (delayed input)
e(n) – estimation error

• By placing the unknown system in series with the adaptive filter, the filter adapts to become the inverse of the unknown system as e(n) becomes very small.
• As shown in the figure, the process requires a delay inserted in the desired signal d(n) path to keep the data at the summation point synchronised.
• Adding the delay keeps the system causal.

[Figure: estimating the inverse of the system]
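A sketch of this configuration with a hypothetical channel, again using the lms function developed later:

% Sketch: inverse system (channel) estimation with LMS
s = sign(randn(2000,1));      % transmitted symbols (hypothetical)
x = filter([1 0.5],1,s);      % received signal = channel output
D = 8;                        % delay keeping the overall system causal
d = [zeros(D,1); s(1:end-D)]; % delayed original signal as desired output
[y,e,c] = lms(x,d,16,0.005);  % filter adapts towards the delayed channel inverse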
Applications: Noise Cancellation

[Diagram: the primary input d(n) = signal + noise; a correlated noise reference x(n) from the noise source feeds the adaptive filter; e(n) = d(n) − y(n) is the signal estimate]

Definitions of signals:
x(n) – noise reference
y(n) – noise estimate
d(n) – signal + noise
e(n) – signal estimate

• In the noise cancellation configuration, the adaptive filter removes noise from a signal in real time.
• The desired signal d(n) combines noise and the desired information.
• To remove the noise, a signal x(n), representing noise that is correlated with the noise to be removed from the desired signal, is filtered through the adaptive filter.
• As long as the input noise to the filter remains correlated with the unwanted noise accompanying the desired signal, the adaptive filter adjusts its coefficients to reduce the difference between y(n) and d(n), removing the noise and leaving a clean signal in e(n).
• Notice that in this application the error signal actually converges to the input data signal, rather than to zero.

[Figure: removing background noise from the useful signal]
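A sketch of the noise cancellation configuration with hypothetical signals, using the lms function developed later:

% Sketch: adaptive noise cancellation with LMS
n = (0:1999)';
s = sin(2*pi*0.05*n);             % useful signal (hypothetical)
v = randn(2000,1);                % noise reference x(n)
d = s + filter([0.7 0.3],1,v);    % primary input: signal + correlated noise
[y,e,c] = lms(v,d,8,0.01);        % y(n) estimates the noise component
% e = d - y converges to the clean signal s, not to zero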

Applications: Linear Predictor

[Diagram: an AR process produces x(n), which is delayed and fed to the adaptive filter; the undelayed signal is the desired output d(n); e(n) = d(n) − y(n) drives the adaptive algorithm]

Definitions of signals:
x(n) – signal to be predicted
y(n) – filter output (signal prediction)
d(n) – desired output
e(n) – estimation error

• Assuming that the signal x(n):
– is periodic, and
– is steady or varies slowly over time,
the adaptive filter can be used to predict future values of the desired signal based on past values.
• When x(n) is periodic and the filter is long enough to remember previous values, this structure, with the delay in the input signal, can perform the prediction.
• This configuration can also be used to remove a periodic signal from a stochastic noise signal.

[Figure: estimating future samples of the signal]
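A sketch of the linear prediction configuration with a hypothetical slowly varying periodic signal:

% Sketch: one-step linear prediction with LMS
n = (0:1999)';
d = sin(2*pi*0.01*n) + 0.1*randn(2000,1);  % signal to be predicted (hypothetical)
D = 1;                                     % prediction delay
x = [zeros(D,1); d(1:end-D)];              % delayed signal as filter input
[y,e,c] = lms(x,d,32,0.005);               % y(n) predicts d(n) from past samples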

Adaptive LMS Algorithm Implementation

• The LMS algorithm can easily be implemented in software. The main steps of the algorithm are:

1. Read in the next sample, x(n), and perform the filtering operation with the current version of the coefficients:
$y(n) = \sum_{k=0}^{N-1} h_n(k)\, x(n-k)$

2. Take the computed output and compare it with the expected output, i.e. calculate the error:
$e(n) = d(n) - y(n)$

3. Update the coefficients (obtain the next set of coefficients) using:
$h_{n+1}(k) = h_n(k) + \mu\, e(n)\, x(n-k)$

• This algorithm is performed in a loop, so that with each new sample a new coefficient vector, h_{n+1}(k), is created.
• In this way, the filter coefficients change and adapt.
• The next slide shows the Matlab function developed to implement the LMS algorithm.

LMS Algorithm Implementation

Matlab Function

function [y,e,c] = lms(x,d,N,mu)
% [y,e,c] = lms(x,d,N,mu)
% Adaptive FIR filter using the LMS adaptive algorithm
M = max(size(x));
y = zeros(M,1);
e = zeros(M,1);
c = zeros(N,1);
w = zeros(1,N);                 % initialise weight (row) vector
for n = N:M
    xN = x(n:-1:n-(N-1));       % most recent N input samples (column)
    y(n) = w * xN;              % filter output y(n)
    e(n) = d(n) - y(n);         % error signal e(n)
    w = w + mu * e(n) * xN';    % LMS coefficient update
    c = [c w'];                 % append current coefficients
end

Variables and Parameters used in the function

INPUT
x   vector containing the samples of the input signal x[n];
    size(x) = [xlen,1] (column vector)
d   vector containing the samples of the desired output signal d[n];
    size(d) = [xlen,1] (column vector)
N   number of coefficients
mu  step-size parameter

OUTPUT
y   vector containing the samples of the output signal y[n];
    size(y) = [xlen,1] (column vector)
e   vector containing the samples of the error signal e[n];
    size(e) = [xlen,1] (column vector)
c   matrix containing the coefficient vectors, one column appended
    per processed sample; size(c) = [N, xlen-N+2]
LMS Algorithm Implementation: Adaptive Noise Cancellation

Let's have a look at some results produced with this algorithm in an Active Noise Cancellation (ANC) application (complete Matlab code is available in the attachment to this lecture).

[Figure: adaptive noise cancellation results]
[Figure: adaptive noise cancellation – adaptive step size too small]
[Figure: adaptive noise cancellation – adaptive step size OK]
[Figure: adaptive noise cancellation – adaptive step size too big – instability]
Applications: Active Noise Control (Noise Cancellation with a Twist)

[Diagram: a noise source; a reference microphone picks up x(n); the ANC controller* produces y(n), which drives the cancelling speaker; an error microphone picks up e(n), and a silent zone of roughly λ/2 forms around it]

x(n) – reference (input) signal
e(n) – error signal
y(n) – output signal

• The desired signal d(n) in this application is the signal from the noise source (at the position of the error microphone).
• The adaptive algorithm will try to minimise the error by making y(n) exactly the opposite of d(n) (same amplitude but opposite phase).
• The phase describes the relative position of the wave in its rising and falling cycle.
• If two waves are in phase, they rise and fall together, whilst if they are exactly out of phase, one rises as the other falls, and so they cancel out.
• Two exactly opposite sounds will therefore interfere destructively at the position of the error microphone, resulting in cancellation of the overall amplitude.

* ANC Controller = Adaptive Filter + Adaptive Algorithm

Active Noise Control: Secondary Path Estimation Problem

FXLMS instead of the LMS Algorithm (FXLMS = Filtered-x LMS)

[Diagram: x(n) passes through the primary path P(z) to give d(n) at the error microphone; the ANC controller W(z) produces y(n), which reaches the error microphone as y'(n) through the secondary path H(z) (DAC, reconstruction filters, power amplifier, loudspeaker, acoustic path, microphone preamplifier, ADC); the LMS block updates W(z) using e(n) and x'(n), the reference filtered through C(z)]

x(n) – input (reference) signal
d(n) – desired signal
y(n) – output signal
y'(n) – filtered output signal
x'(n) – filtered (x) input signal
e(n) – error signal
P(z) – primary path
H(z) – secondary path
C(z) – estimate of the secondary path
W(z) – adaptive filter

• The secondary path H(z) needs to be accurately estimated (by C(z)) to keep FXLMS stable.
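A minimal Matlab sketch of an FXLMS loop; all paths, signals, and parameters below are assumed examples, and the acoustic paths are simulated rather than measured:

% Sketch: FXLMS update (hypothetical paths; C(z) taken as a perfect H(z) estimate)
p  = [0 0 0.9 0.4];                 % P(z): primary path (assumed)
hs = [0 0.6 0.3];                   % H(z): true secondary path (assumed)
cs = hs;                            % C(z): estimate of the secondary path
Nw = 16; w = zeros(1,Nw);           % W(z): adaptive controller coefficients
mu = 0.001; M = 4000;
x  = randn(M,1);                    % reference signal
d  = filter(p,1,x);                 % disturbance at the error microphone
xf = filter(cs,1,x);                % filtered-x signal x'(n) = C(z){x(n)}
y  = zeros(M,1); e = zeros(M,1);
for n = Nw:M
    y(n) = w * x(n:-1:n-Nw+1);              % controller output y(n)
    yp   = hs * y(n:-1:n-2);                % y'(n): y(n) through the secondary path
    e(n) = d(n) - yp;                       % residual at the error microphone
    w = w + mu * e(n) * xf(n:-1:n-Nw+1)';   % FXLMS: update uses x'(n), not x(n)
end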

Secondary Path Estimation Using Adaptive Filtering and LMS

[Diagram: a white noise generator produces x(n), which drives both the secondary path H(z), giving d(n), and the adaptive filter C(z), giving y(n); the LMS algorithm adapts C(z) using e(n) = d(n) − y(n)]
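Estimating the secondary path is just system identification with a white-noise input, so the lms function given earlier can be reused; the "true" path below is a hypothetical example:

% Sketch: secondary path estimation with white noise and LMS
hs = [0 0.6 0.3 0.15];        % "true" secondary path H(z) (assumed)
x  = randn(5000,1);           % white noise excitation
d  = filter(hs,1,x);          % measured response at the error microphone
[y,e,c] = lms(x,d,8,0.01);    % final coefficients approximate H(z), i.e. C(z)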

ANC – Some Results

[Figures: ANC results. Only the tail of the true impulse response is not estimated correctly (a longer adaptive filter might help, but it means more computation, so this estimate is used). Part of the response does not need to be estimated.]

LMS Algorithm for Image Processing

• Adaptive filtering can also be applied to 2D signal processing.
• Example: image noise reduction using the LMS algorithm.

[Figure: a noisy greyscale image (left) and the same image after passing through a 2D LMS adaptive filter (right)]
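One possible 2D LMS scheme is sketched below: each pixel is adaptively predicted from its causal neighbours, and the prediction is taken as the denoised value. The image file, window shape, and step size are assumptions, and this is not necessarily the exact scheme used to produce the figures above:

% Sketch: 2D LMS denoising via adaptive prediction from causal neighbours
g = double(imread('cameraman.tif'))/255;   % hypothetical test image
g = g + 0.05*randn(size(g));               % add noise
[R,C] = size(g);
w = zeros(1,4); mu = 0.05;                 % 4-pixel causal window, assumed step size
out = g;                                   % output image (borders left unfiltered)
for m = 2:R-1
    for n = 2:C-1
        X = [g(m,n-1) g(m-1,n-1) g(m-1,n) g(m-1,n+1)];  % causal neighbours
        yhat = w * X';                     % predicted (denoised) pixel
        err  = g(m,n) - yhat;              % prediction error
        w = w + mu * err * X;              % 2D LMS coefficient update
        out(m,n) = yhat;                   % store the estimate
    end
end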
