
Sherif H. El-Gohary, PhD
sherigoal@gmail.com
Introduction
Linear filters:
• The filter output is a linear function of the filter input.
Design methods:
• The classical approach: frequency-selective filters such as low-pass / band-pass / notch filters, etc.
• Optimal filter design: mostly based on minimizing the mean-square value of the error signal.

Introduction
• Adaptive filters differ from other filters such as FIR and IIR in the sense that:
  – the coefficients are not determined by a set of desired specifications;
  – the coefficients are not fixed.
• With adaptive filters the specifications are not known and change with time.
• Applications include: process control, medical instrumentation, speech processing, echo and noise cancellation, and channel equalization.
Wiener filter
• Work of Wiener in 1942 and Kolmogorov in 1939.
• It is based on a priori statistical information.
• When such a priori information is not available, which is usually the case, it is not possible to design a Wiener filter in the first place.
Adaptive filter
• The signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time.
• An adaptive filter has an adaptation algorithm that is meant to monitor the environment and vary the filter transfer function accordingly.
• Based on the actual signals received, it attempts to find the optimum filter design.
Adaptive filter
• The basic operation now involves two processes:
1. A filtering process, which produces an output signal in response to a given input signal.
2. An adaptation process, which aims to adjust the filter parameters (filter transfer function) to the (possibly time-varying) environment.
• Often, the (average) square value of the error signal is used as the optimization criterion, as formalized below.
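In symbols, with $d(n)$ the desired response and $y(n)$ the filter output, this criterion (a standard formulation, not written out in this text) is:

$$e(n) = d(n) - y(n), \qquad J = E\big[e^2(n)\big] \to \min$$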
Adaptive filter
• Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing.
• When processing analog signals, the adaptive filter is then preceded by an A/D converter and followed by a D/A converter.
Adaptive filter
• The generalization to adaptive IIR filters leads to stability problems.
• It is therefore common to use an FIR digital filter with adjustable coefficients.
Applications of Adaptive Filters: Identification
• Used to provide a linear model of an unknown plant.
• Applications: system identification.
Applications of Adaptive Filters: Inverse Modeling
• Used to provide an inverse model of an unknown plant.
• Applications: equalization (communications channels).
Applications of Adaptive Filters: Prediction
• Used to provide a prediction of the present value of a random signal.
• Applications: linear predictive coding.
Applications of Adaptive Filters: Interference Cancellation
• Used to cancel unknown interference from a primary signal.
• Applications: echo / noise cancellation (hands-free car phones, aircraft headphones, etc.)
Example: Acoustic Echo Cancellation
LMS Adaptive Algorithm
• Introduced by Widrow & Hoff in 1959.
• In the family of stochastic gradient algorithms.
• An approximation of the steepest-descent method.
• Based on the MMSE (minimum mean-square error) criterion.
• The adaptive process contains two input signals:
1. The filtering process, producing the output signal.
2. The desired signal (training sequence).
• Adaptive process: recursive adjustment of the filter tap weights.
LMS Algorithm
• The most popular adaptation algorithm is LMS.
• Define the cost function as the mean-squared error.
• Move toward the minimum on the error surface; the gradient of the error surface is estimated at every iteration, giving the update

(updated tap-weight vector) = (old tap-weight vector) + (learning-rate parameter) × (tap-input vector) × (error signal)
LMS Algorithm Steps
M 1

 Filter output yn   u n  k wk n


*

k 0

 Estimation error en  d n  yn

 Tap-weight adaptation wk n  1  wk n  un  ke n *

 update value   old value   learning -  tap  


       error 
 of tap - weigth   of tap - weight   rate  input  
 vector   vector   parameter vector signal
      
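As a concrete illustration, here is a minimal NumPy sketch of these three steps for real-valued signals (the function and variable names are my own, not from the slides):

```python
import numpy as np

def lms(u, d, M=8, mu=0.01):
    """Minimal real-valued LMS filter.
    u: input signal, d: desired signal, M: number of taps, mu: step size.
    Returns the filter output y, error e, and final tap weights w."""
    N = len(u)
    w = np.zeros(M)                      # tap-weight vector, zero initial guess
    y = np.zeros(N)
    e = np.zeros(N)
    for n in range(M - 1, N):
        u_n = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
        y[n] = w @ u_n                   # filter output y(n)
        e[n] = d[n] - y[n]               # estimation error e(n)
        w = w + mu * u_n * e[n]          # tap-weight adaptation
    return y, e, w
```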
Stability of LMS
• The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies
$$0 < \mu < \frac{2}{\lambda_{\max}}$$
• Here $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix of the input data.
• A more practical test for stability is
$$0 < \mu < \frac{2}{\text{tap-input power}}$$
• Larger values of the step size:
  – increase the adaptation rate (faster adaptation);
  – increase the residual mean-squared error.
A small sketch of both tests follows.
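Both bounds could be estimated from data roughly as follows (a sketch with an assumed helper name, for a wide-sense-stationary input):

```python
import numpy as np

def mu_bounds(u, M):
    """Estimate the two LMS stability bounds for an M-tap filter.
    Returns (2/lambda_max, 2/tap-input power)."""
    L = len(u)
    # Biased autocorrelation estimates for lags 0..M-1.
    r = np.array([u[:L - k] @ u[k:] for k in range(M)]) / L
    # M x M Toeplitz correlation matrix of the input data.
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    lam_max = np.max(np.linalg.eigvalsh(R))
    tap_input_power = M * np.mean(u**2)   # sum of E[u^2(n-k)] over the M taps
    return 2.0 / lam_max, 2.0 / tap_input_power
```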
STEEPEST DESCENT EXAMPLE

• Given the following function, we need to obtain the vector that gives the absolute minimum:
$$y(C_1, C_2) = C_1^2 + C_2^2$$
• It is obvious that $C_1 = C_2 = 0$ gives the minimum.

[Figure: the quadratic error function (quadratic bowl) over the $(C_1, C_2)$ plane.]

Now let's find the solution by the steepest descent method.
STEEPEST DESCENT EXAMPLE

• We start by assuming $(C_1 = 5,\ C_2 = 7)$.
• We select the constant $\mu$. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. We select $\mu = 0.05$.
• The gradient vector is:
$$\nabla y = \begin{bmatrix} dy/dC_1 \\ dy/dC_2 \end{bmatrix} = \begin{bmatrix} 2C_1 \\ 2C_2 \end{bmatrix}$$
• So our iterative equation is:
$$\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n+1]} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]} - 0.05\,\nabla y = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]} - 0.1\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]} = 0.9\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]}$$
STEEPEST DESCENT EXAMPLE

$$\text{Iteration 1: } \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \end{bmatrix} \ \text{(initial guess)}$$
$$\text{Iteration 2: } \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 4.5 \\ 6.3 \end{bmatrix}$$
$$\text{Iteration 3: } \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 4.05 \\ 5.67 \end{bmatrix}$$
$$\ldots$$
$$\text{Iteration 60: } \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} \approx \begin{bmatrix} 0.01 \\ 0.013 \end{bmatrix}$$
$$\lim_{n \to \infty} \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

[Figure: the trajectory of $(C_1, C_2)$ on the quadratic bowl, converging to the minimum.]

As we can see, the vector $[C_1, C_2]$ converges to the value that yields the function minimum, and the speed of this convergence depends on $\mu$.
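The same iteration takes a few lines of NumPy (a sketch reproducing the numbers above):

```python
import numpy as np

# Steepest descent on y(C1, C2) = C1^2 + C2^2 from the initial guess (5, 7).
c = np.array([5.0, 7.0])
mu = 0.05
for n in range(60):
    grad = 2 * c        # gradient vector [2*C1, 2*C2]
    c = c - mu * grad   # equivalent to c = 0.9 * c

print(c)                # approaches [0, 0], the function minimum
```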
LMS – CONVERGENCE GRAPH
Example for an unknown channel of 2nd order:

[Figure: MSE learning curve; the tap weights converge to the desired combination of taps.]

This graph illustrates the LMS algorithm. First we start from a guess of the tap weights. Then we repeatedly step in the direction opposite the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
LMS ALGORITHM FOR COEFFICIENT ADJUSTMENT
Let $\{x(n)\}$ denote the input sequence to the filter, and let the corresponding output be $\{y(n)\}$:
$$y(n) = \sum_{k=0}^{N-1} h(k)\, x(n-k)$$
Suppose that we also have a desired sequence $\{d(n)\}$ with which we can compare the FIR filter output. Then we can form the error sequence $\{e(n)\}$:
$$e(n) = d(n) - y(n)$$
LMS ALGORITHM FOR COEFFICIENT ADJUSTMENT
The coefficients of the FIR filter will be selected to minimize the sum of squared errors. Thus we have
$$\mathcal{E} = \sum_{n} e^2(n) = \sum_{n} \left[ d(n) - \sum_{k=0}^{N-1} h(k)\, x(n-k) \right]^2$$
where
$$r_{dx}(k) = \sum_{n} d(n)\, x(n-k)$$
is the cross-correlation between the desired output sequence $\{d(n)\}$ and the input sequence $\{x(n)\}$, and
$$r_{xx}(k) = \sum_{n} x(n)\, x(n+k)$$
is the autocorrelation sequence of $\{x(n)\}$.
LMS ALGORITHM FOR COEFFICIENT ADJUSTMENT
The sum of squared errors $\mathcal{E}$ is a quadratic function of the FIR filter coefficients. Consequently, the minimization of $\mathcal{E}$ with respect to the filter coefficients $\{h(k)\}$ results in a set of linear equations. By differentiating $\mathcal{E}$ with respect to each of the filter coefficients, we obtain
$$\sum_{k=0}^{N-1} h(k)\, r_{xx}(l-k) = r_{dx}(l), \qquad l = 0, 1, \ldots, N-1$$
This is the set of linear (normal) equations that yields the optimum filter coefficients; a small solver sketch follows.
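These normal equations can be solved directly. Here is a sketch (the helper name and the finite-sum correlation estimates are my own choices):

```python
import numpy as np

def optimum_taps(x, d, N):
    """Solve the normal equations sum_k h(k) r_xx(l-k) = r_dx(l) for h."""
    L = len(x)
    # Autocorrelation r_xx(k) and cross-correlation r_dx(k) for lags 0..N-1.
    rxx = np.array([x[:L - k] @ x[k:] for k in range(N)])
    rdx = np.array([d[k:] @ x[:L - k] for k in range(N)])
    # Toeplitz system matrix built from r_xx.
    R = np.array([[rxx[abs(l - k)] for k in range(N)] for l in range(N)])
    return np.linalg.solve(R, rdx)
```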
LMS ALGORITHM FOR COEFFICIENT ADJUSTMENT

The coefficients are adjusted recursively at each time step:
$$h_{n+1}(k) = h_n(k) + \Delta\, e(n)\, x(n-k), \qquad k = 0, 1, \ldots, N-1$$
where $\Delta$ is called the step-size parameter, and $x(n-k)$ is the sample of the input signal located at the $k$th tap of the filter at time $n$.

The step-size parameter $\Delta$ controls the rate of convergence of the algorithm to the optimum solution.

To ensure stability, $\Delta$ must be chosen in the range
$$0 < \Delta < \frac{1}{10\, N\, P_x}$$
where $N$ is the length of the adaptive FIR filter and $P_x$ is the power in the input signal, which can be approximated by
$$P_x \approx \frac{1}{1 + M} \sum_{n=0}^{M} x^2(n)$$
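A sketch of choosing $\Delta$ inside this range (the helper name is assumed):

```python
import numpy as np

def pick_delta(x, N):
    """Pick a step size safely inside 0 < Delta < 1/(10*N*Px)."""
    M = len(x) - 1
    Px = np.sum(x**2) / (1 + M)   # approximate input signal power
    return 0.5 / (10 * N * Px)    # midpoint of the stable range
```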
MATLAB IMPLEMENTATION
SYSTEM IDENTIFICATION
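Since no code survives in this text version, here is a hypothetical NumPy stand-in for the system-identification demo, reusing the lms() sketch above (the plant coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
plant = np.array([0.5, -0.3, 0.2])    # "unknown" plant taps (assumed)
u = rng.standard_normal(5000)         # white-noise excitation
d = np.convolve(u, plant)[:len(u)]    # plant output serves as the desired signal

y, e, w = lms(u, d, M=3, mu=0.01)
print(w)                              # adapts toward [0.5, -0.3, 0.2]
```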
SUPPRESSION OF NARROWBAND INTERFERENCE IN A WIDEBAND SIGNAL
The signal x(n) is delayed by D samples, where the delay D is chosen sufficiently large so that the wideband signal components w(n) and w(n−D), which are contained in x(n) and x(n−D), respectively, are uncorrelated.
The output of the adaptive FIR filter is then an estimate of the narrowband interference, and the error signal is an estimate of the wideband signal.
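A sketch of this decorrelation-delay structure with synthetic signals (D, M, and mu are assumptions), again reusing lms():

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(10000)
wb = rng.standard_normal(10000)          # wideband signal w(n)
x = wb + np.sin(0.2 * np.pi * n)         # plus narrowband interference

D = 10                                   # decorrelation delay
x_delayed = np.concatenate([np.zeros(D), x[:-D]])   # x(n - D)

# Predict x(n) from x(n-D): only the narrowband part is predictable.
y, e, w = lms(x_delayed, x, M=16, mu=0.005)
# y(n): estimate of the interference; e(n): estimate of the wideband signal.
```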
SMART ANTENNAS

[Figure: adaptive array antenna; the element outputs feed a linear combiner whose adaptive weights are adjusted to suppress interference.]
Audio Noise Reduction
• A popular application of acoustic noise reduction is headsets for pilots. This uses two microphones.

Block Diagram of a Noise Reduction Headset:
• Near microphone: d(n) = speech + noise (the primary input).
• Far microphone: x(n) = noise′ (the reference input to the adaptive filter).
• Adaptive filter output: y(n), an estimate of the noise.
• Speech output: e(n) = d(n) − y(n), which is also fed back to the adaptive filter.
A sketch with synthetic signals follows.
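A minimal stand-in for this two-microphone canceller, again reusing lms() (the noise path [0.7, 0.2] and all signal choices are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.arange(20000)
speech = np.sin(2 * np.pi * 0.01 * n)                 # stand-in for the speech
noise = rng.standard_normal(20000)                    # noise at the far microphone
noise_near = np.convolve(noise, [0.7, 0.2])[:len(n)]  # noise path to the near mic

d = speech + noise_near             # near microphone: speech + noise
x = noise                           # far microphone: noise reference
y, e, w = lms(x, d, M=8, mu=0.005)
# y(n) tracks the noise in d(n); e(n) = d(n) - y(n) is the cleaned speech.
```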
The Simulink Model
Setting the Step size (mu)
• The rate of convergence of the LMS algorithm is controlled by the "Step size (mu)".
• This is the critical variable.
Trace of Input to Model
• "Input" = Signal + Noise.
Trace of LMS Filter Output
• "Output" starts at zero and grows.
Trace of LMS Filter Error
• "Error" contains the noise.
Typical C6713 DSK Setup

[Figure: C6713 DSK connected via USB to a PC, with a +5 V supply, headphones, and a microphone.]
