Adaptive Filtering Part 1
computational simplicity!

\[
\mathbf{h}_M(n) =
\begin{bmatrix}
h(0, n) \\ h(1, n) \\ h(2, n) \\ \vdots \\ h(M-1, n)
\end{bmatrix},
\qquad
\mathbf{X}_M(n) =
\begin{bmatrix}
x(n) \\ x(n-1) \\ x(n-2) \\ \vdots \\ x(n-M+1)
\end{bmatrix}
\]
Dr. Deepa Kundur (University of Toronto) Adaptive Filtering: Part III 4 / 24
Chapter 13: Adaptive Filtering 13.3 Adaptive Direct-Form Filters RLS Algorithms
RLS Problem Formulation

Given observations: \(\mathbf{X}_M(l),\ l = 0, 1, \ldots, n\)

\[ x(0)\quad x(1)\quad x(2)\quad x(3)\quad x(4)\quad \cdots \quad x(n-3)\quad x(n-2)\quad x(n-1)\quad x(n) \]

Example, M = 3:
\begin{align*}
\mathbf{X}_3(0) &= [\,0 \quad 0 \quad x(0)\,]^t \\
\mathbf{X}_3(1) &= [\,0 \quad x(0) \quad x(1)\,]^t \\
\mathbf{X}_3(2) &= [\,x(0) \quad x(1) \quad x(2)\,]^t \\
\mathbf{X}_3(3) &= [\,x(1) \quad x(2) \quad x(3)\,]^t \\
\mathbf{X}_3(4) &= [\,x(2) \quad x(3) \quad x(4)\,]^t \\
&\ \ \vdots \\
\mathbf{X}_3(n-1) &= [\,x(n-3) \quad x(n-2) \quad x(n-1)\,]^t \\
\mathbf{X}_3(n) &= [\,x(n-2) \quad x(n-1) \quad x(n)\,]^t
\end{align*}
RLS Problem Formulation
Given observations: \(\mathbf{X}_M(l),\ l = 0, 1, \ldots, n\)

Given desired sequence: \(d(l),\ l = 0, 1, \ldots, n\)

Determine: \(\mathbf{h}_M(n)\) that minimizes

\[
\mathcal{E}_M = \sum_{l=0}^{n} w^{n-l} \, |e_M(l, n)|^2
\]

where

\begin{align*}
e_M(l, n) &= d(l) - \hat{d}(l, n) \\
          &= d(l) - \mathbf{h}_M^t(n) \, \mathbf{X}_M(l)
\end{align*}

and \(0 < w < 1\) is a weighting factor.
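The cost \(\mathcal{E}_M\) can be evaluated by brute force for any candidate coefficient vector. The following numpy sketch is not part of the slides; `input_vector` and `rls_cost` are illustrative helper names, and real-valued signals with \(x(k) = 0\) for \(k < 0\) are assumed:

```python
import numpy as np

def input_vector(x, l, M):
    """X_M(l) = [x(l), x(l-1), ..., x(l-M+1)]^t, with x(k) = 0 for k < 0."""
    return np.array([x[l - k] if l - k >= 0 else 0.0 for k in range(M)])

def rls_cost(h, x, d, w, n):
    """E_M = sum_{l=0}^{n} w^{n-l} |d(l) - h^t X_M(l)|^2."""
    M = len(h)
    return sum(w ** (n - l) * (d[l] - h @ input_vector(x, l, M)) ** 2
               for l in range(n + 1))

# Arbitrary example data:
x = np.array([1.0, 2.0, 3.0, 4.0])
d = np.array([1.0, 1.5, 2.0, 2.5])
h = np.array([0.5, 0.25])
print(rls_cost(h, x, d, w=0.9, n=3))
```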
Output of Adaptive FIR Filter
Consider \(l = 0, 1, \ldots, n\):

\begin{align*}
\hat{d}(l, n) &= \mathbf{h}_M^t(n) \, \mathbf{X}_M(l) \\
&= [\,h(0, n) \quad h(1, n) \quad \cdots \quad h(M-1, n)\,]
\begin{bmatrix}
x(l) \\ x(l-1) \\ \vdots \\ x(l-M+1)
\end{bmatrix} \\
&= \sum_{k=0}^{M-1} h(k, n)\, x(l-k) = h(l, n) * x(l)
\end{align*}
Why are there two arguments for the error \(e_M\)?

\[
\mathcal{E}_M = \sum_{l=0}^{n} w^{n-l} \, |e_M(l, n)|^2
\]

There is an error associated with each input \(\mathbf{X}_M(l)\) (indexed with \(l\)):

\[
e_M(l, n) = d(l) - \mathbf{h}_M^t(n) \, \mathbf{X}_M(l)
\]
Weighting Factor
\[
\mathcal{E}_M = \sum_{l=0}^{n} w^{n-l} \, |e_M(l, n)|^2, \qquad 0 < w < 1
\]

\[
l = 0, 1, \ldots, n \;\Longrightarrow\; n - l = n, n-1, \ldots, 0
\]
\[
l = 0, 1, \ldots, n \;\Longrightarrow\; 0 < w^{n} < w^{n-1} < \cdots < w^{0} = 1
\]

Objective: weight more recent data points more heavily to allow the
filter coefficients to adapt to time-varying statistical
characteristics of the data.
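A quick numerical look at the weighting (illustrative only; w = 0.95 and n = 50 are arbitrarily chosen):

```python
import numpy as np

w, n = 0.95, 50
l = np.arange(n + 1)
weights = w ** (n - l)

# Most recent sample (l = n) gets weight w^0 = 1; older samples decay geometrically.
print(weights[-1], weights[0])
# The effective memory of the exponential window is roughly 1/(1 - w) samples.
print(1 / (1 - w))  # → 20 samples for w = 0.95
```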
Minimization of \(\mathcal{E}_M\)

The RLS cost function is:

\[
\mathcal{E}_M = \sum_{l=0}^{n} w^{n-l} \, |e_M(l, n)|^2
\]

Recall the MSE cost function:

\[
\mathcal{E}_M = E[\,|e(n)|^2\,]
\]

whose minimization with respect to \(\mathbf{h}_M(n)\) gives the Wiener-Hopf equations . . .
Wiener-Hopf Equations
The Wiener-Hopf equations can be represented as:

\[
\Gamma_M \, \mathbf{h}_M = \boldsymbol{\gamma}_d
\]

where
- \(\mathbf{h}_M\) denotes the vector of adaptive filter coefficients
- \(\boldsymbol{\gamma}_d\) is an \(M \times 1\) crosscorrelation vector
- \(\Gamma_M\) is an \(M \times M\) Hermitian autocorrelation matrix
Similarly, minimization of

\[
\mathcal{E}_M = \sum_{l=0}^{n} w^{n-l} \, |e_M(l, n)|^2
\]

with respect to \(\mathbf{h}_M(n)\) yields the solution

\[
\mathbf{R}_M(n) \, \mathbf{h}_M(n) = \mathbf{D}_M(n)
\]

where
- \(\mathbf{R}_M(n)\) is the estimated signal correlation matrix:
\[
\mathbf{R}_M(n) = \sum_{l=0}^{n} w^{n-l} \, \mathbf{X}_M(l) \, \mathbf{X}_M^t(l)
\]
- \(\mathbf{D}_M(n)\) is the estimated crosscorrelation vector:
\[
\mathbf{D}_M(n) = \sum_{l=0}^{n} w^{n-l} \, \mathbf{X}_M(l) \, d(l)
\]
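For illustration, \(\mathbf{R}_M(n)\) and \(\mathbf{D}_M(n)\) can be accumulated directly from their defining sums and the normal equations solved explicitly. This sketch (not from the slides; helper names are hypothetical) assumes real-valued signals:

```python
import numpy as np

def input_vector(x, l, M):
    """X_M(l) = [x(l), x(l-1), ..., x(l-M+1)]^t, with x(k) = 0 for k < 0."""
    return np.array([x[l - k] if l - k >= 0 else 0.0 for k in range(M)])

def correlation_estimates(x, d, w, n, M):
    """R_M(n) and D_M(n) as exponentially weighted sums of rank-one terms."""
    R = np.zeros((M, M))
    D = np.zeros(M)
    for l in range(n + 1):
        X = input_vector(x, l, M)
        R += w ** (n - l) * np.outer(X, X)
        D += w ** (n - l) * d[l] * X
    return R, D

rng = np.random.default_rng(0)
x = rng.standard_normal(20)
d = rng.standard_normal(20)
R, D = correlation_estimates(x, d, w=0.9, n=19, M=3)
h = np.linalg.solve(R, D)   # h_M(n) = R_M^{-1}(n) D_M(n)
```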
The solution is given by

\[
\mathbf{h}_M(n) = \mathbf{R}_M^{-1}(n) \, \mathbf{D}_M(n)
\]

Note: \(\mathbf{R}_M(n)\) is not Toeplitz as \(\Gamma_M\) is in the LMS case.

Recall, for LMS, \(\Gamma_M\) is a Hermitian autocorrelation matrix, e.g. for \(M = 5\):

\[
\Gamma_5 =
\begin{bmatrix}
\gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) & \gamma_{xx}(3) & \gamma_{xx}(4) \\
\gamma_{xx}(-1) & \gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) & \gamma_{xx}(3) \\
\gamma_{xx}(-2) & \gamma_{xx}(-1) & \gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) \\
\gamma_{xx}(-3) & \gamma_{xx}(-2) & \gamma_{xx}(-1) & \gamma_{xx}(0) & \gamma_{xx}(1) \\
\gamma_{xx}(-4) & \gamma_{xx}(-3) & \gamma_{xx}(-2) & \gamma_{xx}(-1) & \gamma_{xx}(0)
\end{bmatrix}
=
\begin{bmatrix}
\gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) & \gamma_{xx}(3) & \gamma_{xx}(4) \\
\gamma_{xx}(1) & \gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) & \gamma_{xx}(3) \\
\gamma_{xx}(2) & \gamma_{xx}(1) & \gamma_{xx}(0) & \gamma_{xx}(1) & \gamma_{xx}(2) \\
\gamma_{xx}(3) & \gamma_{xx}(2) & \gamma_{xx}(1) & \gamma_{xx}(0) & \gamma_{xx}(1) \\
\gamma_{xx}(4) & \gamma_{xx}(3) & \gamma_{xx}(2) & \gamma_{xx}(1) & \gamma_{xx}(0)
\end{bmatrix}
\]

Note: \(\Gamma_5^t = \Gamma_5\), and more generally \(\Gamma_M^t = \Gamma_M\), because \(\gamma_{xx}(n) = \gamma_{xx}(-n)\).
The solution is given by

\[
\mathbf{h}_M(n) = \mathbf{R}_M^{-1}(n) \, \mathbf{D}_M(n)
\]

Note: \(\mathbf{R}_M(n)\) is not Toeplitz as \(\Gamma_M\) is in the LMS case.

The estimated signal correlation matrix is:

\[
\mathbf{R}_M(n) = \sum_{l=0}^{n} w^{n-l} \, \underbrace{\mathbf{X}_M(l) \, \mathbf{X}_M^t(l)}_{\text{NOT Toeplitz}}
\]

e.g., for \(M = 3\):

\[
\mathbf{X}_3(l) \, \mathbf{X}_3^t(l) =
\underbrace{\begin{bmatrix}
|x(l)|^2 & x(l)x(l-1) & x(l)x(l-2) \\
x(l-1)x(l) & |x(l-1)|^2 & x(l-1)x(l-2) \\
x(l-2)x(l) & x(l-2)x(l-1) & |x(l-2)|^2
\end{bmatrix}}_{\text{NOT a Toeplitz matrix}}
\]
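A small numeric example (values chosen arbitrarily, real signals assumed) makes the rank-one, non-Toeplitz structure of each term visible:

```python
import numpy as np

X = np.array([2.0, -1.0, 0.5])   # X_3(l) = [x(l), x(l-1), x(l-2)]^t
outer = np.outer(X, X)           # X_3(l) X_3^t(l)
print(outer)

# Each term is rank one and symmetric, but NOT Toeplitz:
# the main diagonal (4.0, 1.0, 0.25) is not constant.
print(np.linalg.matrix_rank(outer))
```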
Computation of \(\mathbf{R}_M^{-1}(n)\)

Adapting the filter coefficients at each time step n:

\[
\mathbf{h}_M(n) = \mathbf{R}_M^{-1}(n) \, \mathbf{D}_M(n)
\]

requires an \(M \times M\) matrix inversion. Computing \(\mathbf{R}_M^{-1}(n)\) from scratch at every step is impractical.

Consider the Matrix Inversion Lemma (Householder, 1964):

\[
\mathbf{R}_M^{-1}(n) = \frac{1}{w}
\left[
\mathbf{R}_M^{-1}(n-1)
- \frac{\mathbf{R}_M^{-1}(n-1) \, \mathbf{X}_M(n) \, \mathbf{X}_M^t(n) \, \mathbf{R}_M^{-1}(n-1)}
       {w + \mathbf{X}_M^t(n) \, \mathbf{R}_M^{-1}(n-1) \, \mathbf{X}_M(n)}
\right]
\]
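The lemma can be verified numerically for the rank-one update \(\mathbf{R}_M(n) = w\,\mathbf{R}_M(n-1) + \mathbf{X}_M(n)\mathbf{X}_M^t(n)\), which follows from the definition of \(\mathbf{R}_M(n)\). A sketch with randomly generated, well-conditioned data (real signals assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
M, w = 3, 0.9

# A well-conditioned "previous" correlation matrix and a new input vector.
A = rng.standard_normal((M, M))
R_prev = A @ A.T + np.eye(M)
X = rng.standard_normal(M)

R_new = w * R_prev + np.outer(X, X)          # R_M(n) = w R_M(n-1) + X X^t
P_prev = np.linalg.inv(R_prev)

# Matrix inversion lemma: update the inverse without re-inverting.
P_new = (P_prev - (P_prev @ np.outer(X, X) @ P_prev)
         / (w + X @ P_prev @ X)) / w

print(np.allclose(P_new, np.linalg.inv(R_new)))  # True
```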
Matrix Inversion Lemma Rewritten
Let

\begin{align*}
\mathbf{P}_M(n) &= \mathbf{R}_M^{-1}(n) \\
\mathbf{K}_M(n) &= \frac{1}{w + \nu_M(n)} \, \mathbf{P}_M(n-1) \, \mathbf{X}_M(n) \\
\nu_M(n) &= \mathbf{X}_M^t(n) \, \mathbf{P}_M(n-1) \, \mathbf{X}_M(n)
\end{align*}

Then

\[
\mathbf{R}_M^{-1}(n) = \frac{1}{w}
\left[
\mathbf{R}_M^{-1}(n-1)
- \frac{\mathbf{R}_M^{-1}(n-1) \, \mathbf{X}_M(n) \, \mathbf{X}_M^t(n) \, \mathbf{R}_M^{-1}(n-1)}
       {w + \mathbf{X}_M^t(n) \, \mathbf{R}_M^{-1}(n-1) \, \mathbf{X}_M(n)}
\right]
\]

becomes

\[
\mathbf{P}_M(n) = \frac{1}{w}
\left[ \mathbf{P}_M(n-1) - \mathbf{K}_M(n) \, \mathbf{X}_M^t(n) \, \mathbf{P}_M(n-1) \right]
\]

Moreover,

\begin{align*}
\mathbf{P}_M(n) \, \mathbf{X}_M(n)
&= \frac{1}{w}
\Big[
\underbrace{\mathbf{P}_M(n-1) \, \mathbf{X}_M(n)}_{= \,[w + \nu_M(n)]\,\mathbf{K}_M(n)}
- \mathbf{K}_M(n) \underbrace{\mathbf{X}_M^t(n) \, \mathbf{P}_M(n-1) \, \mathbf{X}_M(n)}_{= \,\nu_M(n)}
\Big] \\
&= \frac{1}{w}
\left[ w \, \mathbf{K}_M(n) + \nu_M(n) \, \mathbf{K}_M(n) - \mathbf{K}_M(n) \, \nu_M(n) \right] \\
&= \mathbf{K}_M(n)
\end{align*}

where \(\mathbf{K}_M(n) = \mathbf{P}_M(n) \, \mathbf{X}_M(n)\) is called the Kalman gain vector.
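The identity \(\mathbf{K}_M(n) = \mathbf{P}_M(n)\,\mathbf{X}_M(n)\) derived above can also be checked numerically (random illustrative data, real signals assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
M, w = 3, 0.9

A = rng.standard_normal((M, M))
P_prev = np.linalg.inv(A @ A.T + np.eye(M))  # P_M(n-1), symmetric positive definite
X = rng.standard_normal(M)                   # X_M(n)

nu = X @ P_prev @ X                          # nu_M(n) = X^t P_M(n-1) X
K = P_prev @ X / (w + nu)                    # Kalman gain vector
P = (P_prev - np.outer(K, X @ P_prev)) / w   # P_M(n) via the rewritten lemma

print(np.allclose(K, P @ X))  # True: K_M(n) = P_M(n) X_M(n)
```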
Estimated Crosscorrelation Vector
Recall \(\mathbf{h}_M(n) = \mathbf{R}_M^{-1}(n)\,\mathbf{D}_M(n) = \mathbf{P}_M(n)\,\mathbf{D}_M(n)\).

\begin{align*}
\mathbf{D}_M(n)
&= \sum_{l=0}^{n} w^{n-l} \, \mathbf{X}_M(l) \, d(l)
 = \sum_{l=0}^{n-1} w^{n-l} \, \mathbf{X}_M(l) \, d(l) + \mathbf{X}_M(n) \, d(n) \\
&= \sum_{l=0}^{n-1} w \cdot w^{n-1-l} \, \mathbf{X}_M(l) \, d(l) + \mathbf{X}_M(n) \, d(n) \\
&= w \underbrace{\sum_{l=0}^{n-1} w^{(n-1)-l} \, \mathbf{X}_M(l) \, d(l)}_{= \,\mathbf{D}_M(n-1)} + \mathbf{X}_M(n) \, d(n)
\end{align*}

\[
\mathbf{D}_M(n) = w \, \mathbf{D}_M(n-1) + d(n) \, \mathbf{X}_M(n)
\]

Note: recursive relationship
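The recursion can be sanity-checked against the direct sum (illustrative helper names, arbitrary data, real signals assumed):

```python
import numpy as np

def input_vector(x, l, M):
    """X_M(l) = [x(l), x(l-1), ..., x(l-M+1)]^t, with x(k) = 0 for k < 0."""
    return np.array([x[l - k] if l - k >= 0 else 0.0 for k in range(M)])

def D_direct(x, d, w, n, M):
    """D_M(n) = sum_{l=0}^{n} w^{n-l} X_M(l) d(l), computed from scratch."""
    return sum(w ** (n - l) * d[l] * input_vector(x, l, M) for l in range(n + 1))

x = np.array([1.0, -2.0, 0.5, 3.0])
d = np.array([0.5, 1.0, -1.0, 2.0])
w, M, n = 0.9, 2, 3

# Recursion: D_M(n) = w D_M(n-1) + d(n) X_M(n)
lhs = D_direct(x, d, w, n, M)
rhs = w * D_direct(x, d, w, n - 1, M) + d[n] * input_vector(x, n, M)
print(np.allclose(lhs, rhs))  # True
```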
Goal: Obtain the current filter coefficients in terms of the previous ones.

\begin{align*}
\mathbf{h}_M(n)
&= \mathbf{R}_M^{-1}(n) \, \mathbf{D}_M(n) = \mathbf{P}_M(n) \, \mathbf{D}_M(n) \\
&= \frac{1}{w}
\left[ \mathbf{P}_M(n-1) - \mathbf{K}_M(n) \, \mathbf{X}_M^t(n) \, \mathbf{P}_M(n-1) \right]
\left[ w \, \mathbf{D}_M(n-1) + d(n) \, \mathbf{X}_M(n) \right] \\
&= \underbrace{\mathbf{P}_M(n-1)\,\mathbf{D}_M(n-1)}_{=\,\mathbf{h}_M(n-1)}
 + \frac{1}{w} \, d(n) \underbrace{\mathbf{P}_M(n-1)\,\mathbf{X}_M(n)}_{=\,[w + \nu_M(n)]\,\mathbf{K}_M(n)} \\
&\quad - \mathbf{K}_M(n)\,\mathbf{X}_M^t(n) \underbrace{\mathbf{P}_M(n-1)\,\mathbf{D}_M(n-1)}_{=\,\mathbf{h}_M(n-1)}
 - \frac{1}{w} \, d(n) \, \mathbf{K}_M(n) \underbrace{\mathbf{X}_M^t(n)\,\mathbf{P}_M(n-1)\,\mathbf{X}_M(n)}_{=\,\nu_M(n)} \\
&= \mathbf{h}_M(n-1) + \mathbf{K}_M(n)
\left[ d(n) - \mathbf{X}_M^t(n) \, \mathbf{h}_M(n-1) \right]
\end{align*}

Recall,

\begin{align*}
\hat{d}(l, n) &= \mathbf{h}_M^t(n)\,\mathbf{X}_M(l) = \mathbf{X}_M^t(l)\,\mathbf{h}_M(n) \\
\hat{d}(n, n-1) &= \mathbf{X}_M^t(n)\,\mathbf{h}_M(n-1)
\end{align*}

Therefore,

\begin{align*}
\mathbf{h}_M(n)
&= \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\left[ d(n) - \hat{d}(n, n-1) \right] \\
&= \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\,\underbrace{e_M(n, n-1)}_{\equiv\, e_M(n)} \\
&= \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\,e_M(n)
\end{align*}

Therefore,

\[
\mathbf{h}_M(n) = \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\,e_M(n)
\]
Direct-Form Recursive Least Squares Algorithm
Initialization:
- Set \(\mathbf{h}_M(-1) = \mathbf{0}\)
- Set \(\mathbf{P}_M(-1) = (1/\delta)\,\mathbf{I}_M\), for \(\delta \ll 1\)
- Note: \(\mathbf{X}_M(-1) = \mathbf{0}\)

For \(n = 0, 1, 2, \ldots\)

Given: \(\mathbf{h}_M(n-1)\), \(\mathbf{P}_M(n-1)\), \(\mathbf{X}_M(n-1)\)
New observation: \(x(n)\)

0. Form the input vector: obtain \(\mathbf{X}_M(n)\) from \(\mathbf{X}_M(n-1)\) by deleting the oldest sample \(x(n-M)\) and inserting the new sample \(x(n)\):

\begin{align*}
\mathbf{X}_M(n-1) &= [\,x(n-M) \quad x(n-M+1) \quad x(n-M+2) \quad \cdots \quad x(n-2) \quad x(n-1)\,] \\
\mathbf{X}_M(n) &= [\,x(n-M+1) \quad x(n-M+2) \quad \cdots \quad x(n-2) \quad x(n-1) \quad x(n)\,]
\end{align*}
ASIDE: for M = 3,

\begin{align*}
\mathbf{X}_3(-1) &= [\,0 \quad 0 \quad 0\,]^t = \mathbf{0} \\
\mathbf{X}_3(0) &= [\,0 \quad 0 \quad x(0)\,]^t \\
\mathbf{X}_3(1) &= [\,0 \quad x(0) \quad x(1)\,]^t \\
\mathbf{X}_3(2) &= [\,x(0) \quad x(1) \quad x(2)\,]^t \\
\mathbf{X}_3(3) &= [\,x(1) \quad x(2) \quad x(3)\,]^t \\
\mathbf{X}_3(4) &= [\,x(2) \quad x(3) \quad x(4)\,]^t \\
&\ \ \vdots \\
\mathbf{X}_3(n-1) &= [\,x(n-3) \quad x(n-2) \quad x(n-1)\,]^t \\
\mathbf{X}_3(n) &= [\,x(n-2) \quad x(n-1) \quad x(n)\,]^t
\end{align*}
\[
\mathbf{h}_M(n) = \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\,e_M(n)
\]

1. Compute the filter output:
\[
\hat{d}(n) = \mathbf{X}_M^t(n)\,\mathbf{h}_M(n-1)
\]

2. Compute the error:
\[
e_M(n) = d(n) - \hat{d}(n)
\]

3. Compute the Kalman gain vector:
\[
\mathbf{K}_M(n) = \frac{\mathbf{P}_M(n-1)\,\mathbf{X}_M(n)}
                      {w + \mathbf{X}_M^t(n)\,\mathbf{P}_M(n-1)\,\mathbf{X}_M(n)}
\]

4. Update the inverse of the correlation matrix:
\[
\mathbf{P}_M(n) = \frac{1}{w}
\left[ \mathbf{P}_M(n-1) - \mathbf{K}_M(n)\,\mathbf{X}_M^t(n)\,\mathbf{P}_M(n-1) \right]
\]

RLS:
\[
\mathbf{h}_M(n) = \mathbf{h}_M(n-1) + \mathbf{K}_M(n)\,e_M(n)
\]
The adjustment rate is controlled by a single step-size parameter for the
LMS algorithm, and by the Kalman gain vector \(\mathbf{K}_M(n)\) for the
RLS algorithm.

RLS has a rapid rate of convergence compared to LMS.

LMS is computationally simpler than RLS.
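The four steps above can be assembled into a complete loop. This is a minimal sketch, not the slides' own code: the function name `rls` and the system-identification setup are illustrative, and real-valued signals are assumed.

```python
import numpy as np

def rls(x, d, M, w=0.99, delta=0.01):
    """Direct-form RLS; returns the final coefficient vector h_M(n).

    Initialization follows the slides: h_M(-1) = 0, P_M(-1) = (1/delta) I.
    """
    h = np.zeros(M)
    P = np.eye(M) / delta
    X = np.zeros(M)
    for n in range(len(x)):
        X = np.concatenate(([x[n]], X[:-1]))  # X_M(n) = [x(n) ... x(n-M+1)]^t
        d_hat = X @ h                         # 1. filter output (a priori)
        e = d[n] - d_hat                      # 2. error
        K = P @ X / (w + X @ P @ X)           # 3. Kalman gain vector
        P = (P - np.outer(K, X @ P)) / w      # 4. inverse-correlation update
        h = h + K * e                         # coefficient update
    return h

# Identify a known FIR system (hypothetical, noise-free example):
rng = np.random.default_rng(2)
h_true = np.array([0.8, -0.4, 0.2])
x = rng.standard_normal(500)
d = np.convolve(x, h_true)[:len(x)]   # desired signal: d(n) = sum_k h(k) x(n-k)
h_est = rls(x, d, M=3)
print(np.round(h_est, 3))  # ≈ [0.8, -0.4, 0.2]
```

In the noise-free case the estimate converges to the true coefficients within a few times M samples, illustrating the rapid convergence of RLS relative to LMS.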