Communication Signal Processing I © M. Juntti, University of Oulu, Dept. Electrical and Inform. Eng., Telecomm. Laboratory & CWC
8. Recursive Least Squares Algorithm
Multiple Linear Regression Model

• Assume an unknown underlying model to be estimated with u(i) and d(i) known.
• Estimation error is e(i) = d(i) − y(i), where the filter output is
  y(i) = Σ_{k=0}^{M−1} w_k* u(i−k).
• Error: e(i) = d(i) − Σ_{k=0}^{M−1} w_k* u(i−k).
• Sum of error squares:
  E(w_0, w_1, …, w_{M−1}) = Σ_{i=i_1}^{i_2} |e(i)|².

Principle of Orthogonality

• The error signal: e(i) = d(i) − Σ_{k=0}^{M−1} w_k* u(i−k) = d(i) − w^H u(i).
• The cost function (the sum of error squares):
  E(w_0, w_1, …, w_{M−1}) = Σ_{i=M}^{N} |e(i)|² = Σ_{i=M}^{N} e(i) e*(i).
• Principle of orthogonality with time average:
  Σ_{i=M}^{N} u(i−k) e*_min(i) = 0, ∀k = 0, 1, …, M−1.
• The filter output then provides the linear LS estimate of the response d(i).
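The block LS solution above can be sketched numerically; a minimal example with illustrative data (real-valued, so the conjugates on w_k drop):

```python
import numpy as np

# Block least squares: find the M tap weights minimizing the sum of
# squared errors |e(i)|^2 over i = M, ..., N (all data illustrative).
rng = np.random.default_rng(0)
M, N = 4, 200
u = rng.standard_normal(N + 1)                 # input samples u(0), ..., u(N)
w_true = np.array([0.5, -0.3, 0.2, 0.1])       # unknown underlying model

# Regression rows [u(i), u(i-1), ..., u(i-M+1)] for i = M, ..., N.
A = np.array([u[i - M + 1:i + 1][::-1] for i in range(M, N + 1)])
d = A @ w_true                                 # desired response (noise-free)

# Least-squares estimate of the tap weights; equals the solution of the
# normal equations implied by the principle of orthogonality.
w_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
```

With noise-free data the estimate recovers the underlying model exactly.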
• The correlation matrix of the predicted state error:
  K(n, n−1) = E[ε(n, n−1) ε^H(n, n−1)],
  where the predicted state-error vector is
  ε(n, n−1) = x(n) − x̂(n | Y_{n−1}).
• Recursive estimate update (the second term is the correction term):
  x̂(n+1 | Y_n) = F(n+1, n) x̂(n | Y_{n−1}) + G(n) α(n).
Recursive One-Step Predictor

x̂(n+1 | Y_n) = F(n+1, n) x̂(n | Y_{n−1}) + G(n) α(n).

Kalman Gain

• The Kalman gain matrix
  G(n) = E[x(n+1) α^H(n)] R^{−1}(n)
  can also be computed recursively.
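A single predictor step can be sketched as follows; the model matrices F and C, the noise covariances, and the initial K are made-up illustrative values, not from the course:

```python
import numpy as np

# One step of the recursive one-step predictor (illustrative 2-state model).
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # state transition F(n+1, n)
C = np.array([[1.0, 0.0]])                   # measurement matrix C(n)
Q1 = 0.01 * np.eye(2)                        # process noise covariance
Q2 = np.array([[0.1]])                       # measurement noise covariance
x_pred = np.zeros((2, 1))                    # current prediction x^(n | Y_{n-1})
K = np.eye(2)                                # predicted state-error correlation K(n, n-1)

y = np.array([[1.2]])                        # new observation y(n)
alpha = y - C @ x_pred                       # innovation alpha(n)
R = C @ K @ C.T + Q2                         # innovation correlation R(n)
G = F @ K @ C.T @ np.linalg.inv(R)           # Kalman gain G(n)
x_next = F @ x_pred + G @ alpha              # x^(n+1 | Y_n): prediction + correction term
K_next = F @ K @ F.T - G @ C @ K @ F.T + Q1  # Riccati update of K(n+1, n)
```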
[Signal-flow graph of the Riccati equation solver, involving Q₁(n); initial condition: K(1, 0).]
Extended Kalman Filter

• Sometimes the basic system model is nonlinear.
• The Kalman filter can be extended to such a case as well:
  1. Linearize the problem approximately by a Taylor series.
  2. Approximate the state equations:
     x(n+1) ≈ F(n+1, n) x(n) + v₁(n) + d(n),
     y(n) ≈ C(n) x(n) + v₂(n),
     where d(n) is deterministic (non-random).
• Kalman filtering still applies, except for a few modifications.

Introduction to RLS Algorithm

• The next step is to apply the method of least squares to update the tap weights of adaptive transversal filters.
• We search for a recursive least squares (RLS) algorithm to update the filter tap weights when new observations (data, input samples) are fed into the filter.
  – More efficient utilization of data than in the LMS algorithm.
  – Improved convergence.
  – Increased complexity.
• Close relationship to Kalman filtering, but the RLS algorithm is treated on its own.
• The cost function to be minimized at time n is
  E(n) = Σ_{i=1}^{n} β(n, i) |e(i)|²,
  where β(n, i) is a weighting or forgetting factor.
• Block processing: taps are fixed over 1 ≤ i ≤ n.
• The weighting factor must satisfy 0 < β(n, i) ≤ 1.
  – Forgetting is needed to cope with statistical variations (nonstationarity).
• Exponential weighting factor: β(n, i) = λ^{n−i}.
• Exponentially weighted least squares:
  E(n) = Σ_{i=1}^{n} λ^{n−i} |e(i)|²,
  z(n) = λ z(n−1) + u(n) d*(n).
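The one-step recursion for z(n) is easy to verify numerically; a small sketch with illustrative complex data:

```python
import numpy as np

# With beta(n, i) = lam**(n - i), the exponentially weighted sum
# z(n) = sum_i lam**(n-i) u(i) d*(i) obeys z(n) = lam*z(n-1) + u(n)*d*(n).
rng = np.random.default_rng(1)
lam, M, n = 0.95, 3, 50
u_hist = rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))
d_hist = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Recursive form: one cheap update per new sample.
z = np.zeros(M, dtype=complex)
for u_i, d_i in zip(u_hist, d_hist):
    z = lam * z + u_i * np.conj(d_i)

# Direct (batch) form for comparison.
z_direct = sum(lam ** (n - 1 - i) * u_hist[i] * np.conj(d_hist[i]) for i in range(n))
```

Both forms agree to machine precision.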
The Impact of the Value of λ

[Learning-curve plot; the curve shown uses λ = 0.999.]

Matrix Inversion Lemma

• General form [S. M. Kay, Fundamentals of Stat. Sign. Proc., Prentice Hall, 1993, p. 571]:
  (A + BCD)^{−1} = A^{−1} − A^{−1} B (D A^{−1} B + C^{−1})^{−1} D A^{−1}.
• Textbook's [1] special case:
  A = B^{−1} + C D^{−1} C^H ⇒ A^{−1} = B − B C (D + C^H B C)^{−1} C^H B.
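The general form of the lemma can be checked numerically on random well-conditioned matrices (sizes and scalings below are arbitrary):

```python
import numpy as np

# Numerical check of (A + BCD)^{-1} = A^{-1} - A^{-1} B (D A^{-1} B + C^{-1})^{-1} D A^{-1}.
rng = np.random.default_rng(2)
m, k = 4, 2
A = 3 * np.eye(m) + 0.1 * rng.standard_normal((m, m))  # well-conditioned A
B = 0.5 * rng.standard_normal((m, k))
C = 2 * np.eye(k)
D = 0.5 * rng.standard_normal((k, m))

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(D @ Ai @ B + np.linalg.inv(C)) @ D @ Ai
```

In the RLS derivation the lemma turns the rank-one update of Φ(n) into a direct update of P(n) = Φ^{−1}(n), avoiding any explicit matrix inversion.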
ŵ(n) = ŵ(n−1) + k(n) [d*(n) − u^H(n) ŵ(n−1)]
     = ŵ(n−1) + k(n) ξ*(n),

where the a priori estimation error differs from the a posteriori estimation error:

ξ(n) = d(n) − ŵ^H(n−1) u(n) ≠ e(n) = d(n) − ŵ^H(n) u(n).
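One weight update can be sketched as follows; the data values are illustrative, and the gain k(n) is computed from P(n−1) = Φ^{−1}(n−1) in the usual exponentially weighted form:

```python
import numpy as np

# One RLS tap-weight update (illustrative values).
M, lam = 3, 0.99
w_prev = np.zeros(M, dtype=complex)           # w^(n-1)
P_prev = np.eye(M, dtype=complex)             # P(n-1) = inverse correlation matrix
u_n = np.array([1.0 + 0j, 0.5, -0.2])         # u(n)
d_n = 0.8 + 0.1j                              # d(n)

k = P_prev @ u_n / (lam + np.conj(u_n) @ P_prev @ u_n)   # gain vector k(n)
xi = d_n - np.conj(w_prev) @ u_n              # a priori error xi(n)
w = w_prev + k * np.conj(xi)                  # w^(n) = w^(n-1) + k(n) xi*(n)
e = d_n - np.conj(w) @ u_n                    # a posteriori error e(n)
P = (P_prev - np.outer(k, np.conj(u_n) @ P_prev)) / lam  # update of P(n)
```

Note that |e(n)| < |ξ(n)| here: the a posteriori error is evaluated with the already updated weights.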
With E_d(n) = Σ_{i=1}^{n} λ^{n−i} |d(i)|², the minimum sum of error squares E_min(n) = E_d(n) − z^H(n) ŵ(n) can be updated recursively. Substituting z(n) = λ z(n−1) + u(n) d*(n) and ŵ(n) = ŵ(n−1) + k(n) ξ*(n):

E_min(n) = λ E_d(n−1) + |d(n)|² − [λ z^H(n−1) + d(n) u^H(n)] [ŵ(n−1) + k(n) ξ*(n)]
         = λ [E_d(n−1) − z^H(n−1) ŵ(n−1)] + d(n) [d*(n) − u^H(n) ŵ(n−1)] − z^H(n) k(n) ξ*(n)
         = λ E_min(n−1) + d(n) ξ*(n) − z^H(n) k(n) ξ*(n),

where ξ(n) = d(n) − ŵ^H(n−1) u(n). Since z^H(n) k(n) = ŵ^H(n) u(n), this reduces to

E_min(n) = λ E_min(n−1) + ξ*(n) e(n).
Convergence Analysis

• Regularization (initialization of the algorithm) is not used.
• The multiple linear regression model is applied:
  d(n) = e_o(n) + w_o^H u(n).
  – Regression parameter: w_o.
  – Measurement error: e_o(n).
• Analysis carried out for λ = 1, i.e., β(n, i) = λ^{n−i} = 1.

Mean Value

z(n) = Σ_{i=1}^{n} u(i) d*(i) = Σ_{i=1}^{n} u(i) [e_o(i) + w_o^H u(i)]* = Σ_{i=1}^{n} u(i) u^H(i) w_o + Σ_{i=1}^{n} u(i) e_o*(i)
     = Φ(n) w_o + Σ_{i=1}^{n} u(i) e_o*(i)

⇒ ŵ(n) = Φ^{−1}(n) z(n) = w_o + Φ^{−1}(n) Σ_{i=1}^{n} u(i) e_o*(i).

– The claim (E[ŵ(n)] = w_o) follows from the above by noting that the expectation of the latter term is zero.
• Two independence assumptions: the input vectors u(1), u(2), …, u(n) are IID and jointly Gaussian.
• The covariance matrix K(n) = E[ε(n) ε^H(n)] of the filter tap-weight error vector ε(n) = ŵ(n) − w_o is
  K(n) = σ² E[Φ^{−1}(n)] = σ²/(n − M − 1) · R^{−1}, n > M + 1
  ⇒ E[ε^H(n) ε(n)] = tr[K(n)] = σ²/(n − M − 1) Σ_{i=1}^{M} 1/λ_i, n > M + 1.
  – Proof: see [1, pp. 576–578].
• Consequences:
  1. MSE is magnified by 1/λ_min; ill-conditioned correlation matrices cause problems.
  2. MSE decreases linearly over time.

• Two kinds of filter output MSE measures:
  – A priori estimation error ξ(n): large value (the MSE of d(1)) at time n = 1, then decays.
  – A posteriori estimation error e(n): small value at time n = 1, then rises.
• The a priori estimation error ξ(n) is more descriptive:
  J′(n) = E[|ξ(n)|²] = σ² + tr[R K(n−1)] = σ² + M σ²/(n − M − 1), n > M + 1.
  – Proof: see [1, pp. 578–579].
Learning Curve ─ The Output MSE: Consequences

1. The learning curve converges in about 2M iterations, i.e., about an order of magnitude faster than the LMS algorithm.
2. As the number of iterations approaches infinity, the MSE approaches the variance σ² of the optimum measurement error e_o(n) ⇒ zero excess MSE in WSS environments.
3. MSE convergence is independent of the eigenvalue spread of the input data correlation matrix.
⇒ Remarkable convergence improvements over the LMS algorithm at the price of increased complexity.

Application Example ─ Equalization

• Transmitted signal: random sequence of ±1's.
• Channel (raised-cosine impulse response):
  h_n = ½ [1 + cos(2π(n − 2)/W)], n = 1, 2, 3; h_n = 0 otherwise.
• 11-tap FIR equalizer.
• Two SNR values: SNR = 30 dB and SNR = 10 dB.

[Figure: channel impulse response.]
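The experiment setup can be sketched as follows; W = 3.1 and the SNR definition (signal power over noise power) are illustrative assumptions:

```python
import numpy as np

# Raised-cosine test channel: h_n = 0.5*(1 + cos(2*pi*(n-2)/W)), n = 1, 2, 3.
def channel(W):
    return np.array([0.5 * (1 + np.cos(2 * np.pi * (n - 2) / W)) for n in (1, 2, 3)])

rng = np.random.default_rng(3)
W, snr_db, n_bits = 3.1, 30.0, 500
a = rng.choice([-1.0, 1.0], size=n_bits)       # random +-1 sequence
h = channel(W)                                 # symmetric 3-tap channel
x = np.convolve(a, h)[:n_bits]                 # channel output with ISI
sigma2 = np.var(x) / 10 ** (snr_db / 10)       # noise power for the chosen SNR
u = x + np.sqrt(sigma2) * rng.standard_normal(n_bits)  # equalizer input u(n)
```

The parameter W controls the eigenvalue spread of the input correlation matrix, so it is the natural knob for comparing RLS and LMS convergence.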
Relation to Kalman Filter

• The RLS algorithm has many similarities to Kalman filtering, but also some differences.
  – RLS: derivation based on a deterministic mathematical model.
  – Kalman: derivation based on a stochastic mathematical model.
  ⇒ Unified approach based on stochastic state-space models.
• The Kalman filtering approaches in the literature are readily available for RLS algorithms.

Relations of RLS Algorithm and Kalman Filter Variables

[Table of corresponding RLS and Kalman filter variables.]
Summary

• The RLS algorithm is derived as a natural application of the method of least squares to the linear filter adaptation problem.
  – Based on the matrix inversion lemma.
• Difference to the LMS algorithm: the step-size parameter µ is replaced by P(n) = Φ^{−1}(n).
• The rate of convergence of the RLS algorithm is typically
  1. an order of magnitude better than that of the LMS algorithm;
  2. invariant to eigenvalue spread;
  3. such that the excess MSE converges to zero.
• The case λ ≠ 1 is considered later ⇒ a change in the last property.
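The whole algorithm fits in a few lines; a minimal real-valued sketch (δ-regularized initialization P(0) = δ^{−1}I; the names follow common textbook notation and the identification example is illustrative):

```python
import numpy as np

def rls(u, d, M, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS: adapt M tap weights so w'u(n) tracks d(n)."""
    w = np.zeros(M)
    P = np.eye(M) / delta                     # P(n) = Phi^{-1}(n), regularized start
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]        # regressor [u(n), ..., u(n-M+1)]
        k = P @ u_n / (lam + u_n @ P @ u_n)   # gain vector k(n)
        xi = d[n] - w @ u_n                   # a priori error xi(n)
        w = w + k * xi                        # tap-weight update
        P = (P - np.outer(k, u_n @ P)) / lam  # matrix-inversion-lemma update of P(n)
    return w

# Identify a known FIR system from noise-free data (lam = 1, the analyzed case).
rng = np.random.default_rng(4)
w_true = np.array([0.4, -0.25, 0.1])
u = rng.standard_normal(300)
d = np.convolve(u, w_true)[:300]
w_hat = rls(u, d, M=3, lam=1.0)
```

With λ = 1 and no noise the estimate converges to the true weights, up to a small bias from the δ-initialization that vanishes as n grows.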