LINEAR FILTERS:
WIENER FILTERS
Ch 02
Introduction
Linear optimum discrete‐time filters are known as Wiener filters.
Wiener filter theory is formulated here for the general case of a complex‐valued
stochastic process, with the filter specified in terms of its impulse response.
Complex‐valued time series are observed in many practical areas,
e.g., communications, radar, and sonar.
The real‐valued time series problem can be treated as a special case.
Linear Optimum DT Filtering: Statement
• Given: complex‐valued, jointly stationary (at least w.s.s.) stochastic
processes $u(n)$ and $d(n)$
• $d(n)$ serves as the desired response
• The linear discrete‐time filter is characterized by its impulse response
$w_0, w_1, w_2, \ldots$ (IIR, or FIR, which is inherently stable):
$y(n) = \sum_{k=0}^{\infty} w_k^* \, u(n-k), \quad n = 0, 1, 2, \ldots$
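The filtering operation above can be sketched numerically. A minimal real‐valued example (the tap weights and input samples below are hypothetical, chosen only for illustration), using NumPy's convolution to form $y(n) = \sum_k w_k^* u(n-k)$:

```python
import numpy as np

# Hypothetical tap weights and input samples (real-valued special case).
w = np.array([0.5, 0.3, 0.2])
u = np.array([1.0, 2.0, 0.0, -1.0, 3.0])

# y(n) = sum_k conj(w_k) * u(n - k): a convolution of conj(w) with u,
# truncated to the length of the input.
y = np.convolve(u, np.conj(w))[:len(u)]
print(y)  # y[0] = 0.5*u[0], y[1] = 0.5*u[1] + 0.3*u[0], ...
```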
Linear Optimum DT Filtering: Statement (cont.)
• $y(n)$ is the estimate of the desired response $d(n)$.
• Minimize the mean‐square error: the cost function is defined as
$J = E[e(n)\,e^*(n)] = E[|e(n)|^2]$
where the error signal is $e(n) = d(n) - y(n) = d(n) - \sum_{k=0}^{\infty} w_k^* \, u(n-k)$.
Linear Optimum DT Filtering: Statement (cont.)
• Problem statement:
• Given
• the filter input, $u(n)$,
• the desired response, $d(n)$,
• find the optimum filter coefficients $w_k$
• that make the estimation error “as small as possible”
• How?
• This is an optimization problem.
• Approach: a statistical approach.
Linear Optimum Filtering: Statement
• Optimize the filter design by minimizing a cost function (performance index) chosen
from the following criteria:
1. Expectation of the absolute value, $E[|e(n)|]$
2. Expectation of the square value (mean‐square value), $E[|e(n)|^2]$
3. Expectation of higher powers of the absolute value of the estimation error,
$E[|e(n)|^p]$, $p > 2$
• Minimization of the mean‐square value of the error (MSE) is mathematically tractable.
Linear Optimum DT Filtering: Statement
• The problem becomes:
• Design a linear discrete‐time filter whose output $y(n)$ provides an estimate of a desired
response $d(n)$, given a set of input samples $u(0), u(1), u(2), \ldots$, such that the mean‐
square value of the estimation error $e(n)$, defined as the difference between the
desired response $d(n)$ and the actual response $y(n)$, is minimized.
Principle of Orthogonality
• The filter output is the convolution of the filter impulse response and the
input:
$y(n) = \sum_{k=0}^{\infty} w_k^* \, u(n-k), \quad n = 0, 1, 2, \ldots$
Principle of Orthogonality
• Error: $e(n) = d(n) - y(n)$
• MSE (mean‐square error) criterion: $J = E[e(n)\,e^*(n)] = E[|e(n)|^2]$
• At the optimum, the gradient of $J$ w.r.t. each optimization variable $w_k$ is zero.
Derivative in complex variables
• Let $w_k = a_k + j b_k$;
then the derivative (gradient) w.r.t. $w_k$ is defined as
$\nabla_k = \dfrac{\partial}{\partial a_k} + j \dfrac{\partial}{\partial b_k}$
• Hence
$\nabla_k J = \dfrac{\partial J}{\partial a_k} + j \dfrac{\partial J}{\partial b_k}$
or, at the minimum, $\nabla_k J = 0$ for every $k$.
Principle of Orthogonality
• The partial derivatives of $J$ form the multidimensional complex gradient $\nabla J$
• $k$th element: $\nabla_k J = \dfrac{\partial J}{\partial a_k} + j \dfrac{\partial J}{\partial b_k}$
• Using $e(n) = d(n) - \sum_i w_i^* u(n-i)$ and the linearity of expectation,
• hence
$\nabla_k J = -2\,E[u(n-k)\,e^*(n)]$
10/28/2021
Principle of Orthogonality
• Since $\nabla_k J = 0$ at the optimum,
$E[u(n-k)\,e_o^*(n)] = 0, \quad k = 0, 1, 2, \ldots$
• The necessary and sufficient condition for the cost function $J$ to attain its minimum
is that the corresponding estimation error $e_o(n)$ be orthogonal to
each input sample that enters into the estimation of the desired response at time $n$.
• The error at the minimum is uncorrelated with the filter input!
• This is a good basis for testing whether the linear filter is operating in its optimum condition.
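The orthogonality principle can be tested numerically. A sketch with simulated real‐valued data (the signal model, filter length, and noise level below are all assumptions): the MSE‐optimal taps are obtained by least squares on sample data, and the sample correlation between each input tap and the resulting error is numerically zero:

```python
import numpy as np

# Simulated data (assumed model): white input, known FIR system plus noise.
rng = np.random.default_rng(0)
N, M = 200_000, 3
u = rng.standard_normal(N)                        # input u(n)
h = np.array([1.0, -0.4, 0.25])                   # "true" system generating d(n)
d = np.convolve(u, h)[:N] + 0.1 * rng.standard_normal(N)  # desired response

# Input matrix U[n, k] = u(n - k); zero the wrapped-around initial samples.
U = np.column_stack([np.roll(u, k) for k in range(M)])
U[:M, :] = 0.0

# Least-squares fit = sample version of the MSE-optimal (Wiener) solution.
w_o = np.linalg.lstsq(U, d, rcond=None)[0]

e_o = d - U @ w_o                 # optimum estimation error
corr = U.T @ e_o / N              # sample E[u(n-k) e_o(n)], k = 0..M-1
print(np.max(np.abs(corr)))       # ~0: the error is orthogonal to every tap
```

Here the orthogonality is exact by the normal equations of least squares; with true ensemble averages replaced by sample averages, `w_o` also recovers the assumed system `h` up to estimation noise.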
Principle of Orthogonality
• Corollary:
If the filter is operating in optimum conditions (in the MSE sense),
$E[y_o(n)\,e_o^*(n)] = 0$
• When the filter operates in its optimum condition, the estimate of the desired
response defined by the filter output $y_o(n)$ and the corresponding estimation error
$e_o(n)$ are orthogonal to each other.
Minimum Mean‐Square Error
• Let the estimate of the desired response that is optimized in
the MSE sense, given the inputs that span the space $\mathcal{U}_n$
(i.e., $u(n), u(n-1), \ldots$), be
$\hat{d}(n \mid \mathcal{U}_n) = y_o(n)$
• Then the error in optimal conditions is
$e_o(n) = d(n) - \hat{d}(n \mid \mathcal{U}_n)$
or $d(n) = \hat{d}(n \mid \mathcal{U}_n) + e_o(n)$
• Also let the minimum MSE be $J_{\min} = E[|e_o(n)|^2] \;(\neq 0)$
Assignment: Derive
$\sigma_d^2 = \sigma_{\hat{d}}^2 + J_{\min}$
• Using the above equation, the variance of $d(n)$ decomposes as
$\sigma_d^2 = \sigma_{\hat{d}}^2 + J_{\min}$, i.e., $J_{\min} = \sigma_d^2 - \sigma_{\hat{d}}^2$
Minimum Mean‐Square Error
• Normalized minimum (optimum) MSE:
$\epsilon = \dfrac{J_{\min}}{\sigma_d^2} = 1 - \dfrac{\sigma_{\hat{d}}^2}{\sigma_d^2}$
• Hence $0 \le \epsilon \le 1$
• If $\epsilon = 0$, the optimum filter operates perfectly, in the sense that there is complete
agreement between $d(n)$ and $\hat{d}(n \mid \mathcal{U}_n)$. (Optimum case)
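The decomposition $\sigma_d^2 = \sigma_{\hat{d}}^2 + J_{\min}$ and the normalized MSE $\epsilon$ can be checked on simulated data. This is a sketch under an assumed real‐valued signal model (the system taps and noise level are hypothetical):

```python
import numpy as np

# Simulated data (assumed model): white input through a short FIR plus noise.
rng = np.random.default_rng(1)
N, M = 100_000, 2
u = rng.standard_normal(N)
d = np.convolve(u, [0.8, 0.5])[:N] + 0.2 * rng.standard_normal(N)

# Input matrix and sample Wiener solution via least squares.
U = np.column_stack([np.roll(u, k) for k in range(M)])
U[:M, :] = 0.0
w_o = np.linalg.lstsq(U, d, rcond=None)[0]
d_hat = U @ w_o                            # optimum estimate of d(n)

J_min = np.mean((d - d_hat) ** 2)          # minimum mean-square error
sigma_d2 = np.mean(d ** 2)                 # second moments (zero-mean signals)
sigma_dhat2 = np.mean(d_hat ** 2)
eps = J_min / sigma_d2                     # normalized MSE, 0 <= eps <= 1
print(sigma_d2, sigma_dhat2 + J_min, eps)
```

The first two printed values agree because $\hat{d}$ and $e_o$ are orthogonal, which is exactly the derivation the assignment above asks for.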
Wiener‐Hopf Equations
• Substituting $e_o^*(n) = d^*(n) - \sum_i w_{oi}\, u^*(n-i)$ into the orthogonality condition:
$E\!\left[u(n-k)\left(d^*(n) - \sum_{i=0}^{\infty} w_{oi}\, u^*(n-i)\right)\right] = 0$
$\sum_{i=0}^{\infty} w_{oi}\, E[u(n-k)\,u^*(n-i)] = E[u(n-k)\,d^*(n)]$
$\sum_{i=0}^{\infty} w_{oi}\, r(i-k) = p(-k), \quad k = 0, 1, 2, \ldots$
where $r(k) = E[u(n)\,u^*(n-k)]$ is the autocorrelation of $u(n)$, and
$p(-k) = E[u(n-k)\,d^*(n)]$ is the cross‐correlation of $u(n)$ and $d(n)$.
• $\sum_{i=0}^{\infty} w_{oi}\, r(i-k) = p(-k)$, $k = 0, 1, 2, \ldots$ are called the Wiener‐Hopf equations.
Wiener‐Hopf Equations: Solution – Linear Transversal (FIR) Filter Case
• Linear transversal (FIR) filter: $y(n) = \sum_{k=0}^{M-1} w_k^* \, u(n-k)$
• The Wiener‐Hopf equations become:
$\sum_{i=0}^{M-1} w_{oi}\, r(i-k) = p(-k), \quad k = 0, 1, 2, \ldots, M-1$
where $w_{o0}, w_{o1}, \ldots, w_{o,M-1}$ are the optimum values of the tap weights of
the filter.
Wiener‐Hopf Equations: Solution – Linear Transversal (FIR) Filter Case
For $k = 0$: $w_{o0}\, r(0) + w_{o1}\, r(1) + \cdots + w_{o,M-1}\, r(M-1) = p(0)$
For $k = 1$: $w_{o0}\, r^*(1) + w_{o1}\, r(0) + \cdots + w_{o,M-1}\, r(M-2) = p(-1)$
⋮
For $k = M-1$:
$w_{o0}\, r^*(M-1) + w_{o1}\, r^*(M-2) + \cdots + w_{o,M-1}\, r(0) = p(1-M)$
Wiener‐Hopf Equations: Solution – Linear Transversal (FIR) Filter Case
In matrix form:
$\begin{bmatrix} r(0) & r(1) & \cdots & r(M-1) \\ r^*(1) & r(0) & \cdots & r(M-2) \\ \vdots & \vdots & & \vdots \\ r^*(M-1) & r^*(M-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} w_{o0} \\ w_{o1} \\ \vdots \\ w_{o,M-1} \end{bmatrix} = \begin{bmatrix} p(0) \\ p(-1) \\ \vdots \\ p(1-M) \end{bmatrix}$
i.e., $\mathbf{R}\,\mathbf{w}_o = \mathbf{p}$
Wiener‐Hopf Equations: Solution – Linear Transversal (FIR) Filter Case
Let $\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T$. Then
$\mathbf{R} = E[\mathbf{u}(n)\,\mathbf{u}^H(n)] = \begin{bmatrix} r(0) & r(1) & \cdots & r(M-1) \\ r^*(1) & r(0) & \cdots & r(M-2) \\ \vdots & \vdots & & \vdots \\ r^*(M-1) & r^*(M-2) & \cdots & r(0) \end{bmatrix}$
$\mathbf{p} = E[\mathbf{u}(n)\,d^*(n)] = [p(0), p(-1), \ldots, p(1-M)]^T$
Wiener‐Hopf Equations (Matrix Form)
• Then the Wiener‐Hopf equations can be written compactly as $\mathbf{R}\,\mathbf{w}_o = \mathbf{p}$
where
$\mathbf{w}_o = [w_{o0}, w_{o1}, \cdots, w_{o,M-1}]^T$
is composed of the optimum (FIR) filter coefficients.
The solution is found to be
$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p}$
• Note that $\mathbf{R}$ is almost always positive definite, and hence invertible.
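Solving the matrix form is a one‐line linear solve. A sketch for a hypothetical $M = 2$ example, assuming an autocorrelation $r(k) = 0.5^{|k|}$ and a cross‐correlation vector $\mathbf{p} = [0.5, 0.25]^T$ (both invented for illustration):

```python
import numpy as np

# Assumed statistics: r(k) = 0.5**|k| (real-valued case, so r(-k) = r(k))
# and a hypothetical cross-correlation vector p.
M = 2
r = lambda k: 0.5 ** abs(k)
R = np.array([[r(i - k) for i in range(M)] for k in range(M)])  # Toeplitz
p = np.array([0.5, 0.25])

# Solve the Wiener-Hopf system R w_o = p; a direct solve is preferred
# over forming R^{-1} explicitly.
w_o = np.linalg.solve(R, p)
print(w_o)
```

For these particular numbers the second tap comes out exactly zero, i.e., a single tap already captures all the correlation between input and desired response.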
Error‐Performance Surface
• Substitute $e(n) = d(n) - \mathbf{w}^H \mathbf{u}(n)$ into $J = E[e(n)\,e^*(n)]$.
• Rewriting,
$J(\mathbf{w}) = \sigma_d^2 - \mathbf{w}^H \mathbf{p} - \mathbf{p}^H \mathbf{w} + \mathbf{w}^H \mathbf{R}\,\mathbf{w}$
Error‐Performance Surface
• When the tap inputs of the FIR filter and the desired response are jointly stationary, the cost
function, or mean‐square error, 𝐽 is precisely a second‐order function of the tap weights in the filter.
• This surface is characterized by a unique minimum. For obvious reasons, we refer to the surface so
described as the error‐performance surface of the FIR filter.
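The unique‐minimum claim can be probed numerically. A sketch with hypothetical statistics $\mathbf{R}$, $\mathbf{p}$, $\sigma_d^2$ (all assumed values), using the real‐valued form $J(\mathbf{w}) = \sigma_d^2 - 2\mathbf{p}^T\mathbf{w} + \mathbf{w}^T\mathbf{R}\,\mathbf{w}$: the cost at random points never drops below its value at $\mathbf{w}_o$:

```python
import numpy as np

# Hypothetical second-order statistics (assumed values).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.5, 0.25])
sigma_d2 = 1.0

J = lambda w: sigma_d2 - 2 * p @ w + w @ R @ w   # real-valued quadratic cost
w_o = np.linalg.solve(R, p)                      # Wiener solution
J_min = J(w_o)

rng = np.random.default_rng(2)
probes = w_o + rng.standard_normal((100, 2))     # random points around w_o
min_gap = min(J(w) - J_min for w in probes)      # >= 0: bowl-shaped surface
print(J_min, min_gap)
```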
Error‐Performance Surface
• At the bottom, or minimum point, of the error‐performance surface, the cost function $J$
attains its minimum value, denoted by $J_{\min}$.
• At this point, the gradient vanishes: $\nabla J(\mathbf{w}_o) = \mathbf{0}$,
or $\mathbf{R}\,\mathbf{w}_o = \mathbf{p}$,
the Wiener‐Hopf equations in matrix form.
Minimum value of Mean‐Square Error
• We calculated that $J_{\min} = \sigma_d^2 - \sigma_{\hat{d}}^2$.
• The estimate of the desired response is $\hat{d}(n \mid \mathcal{U}_n) = \mathbf{w}_o^H \mathbf{u}(n)$.
Hence its variance is $\sigma_{\hat{d}}^2 = \mathbf{w}_o^H \mathbf{R}\,\mathbf{w}_o = \mathbf{p}^H \mathbf{w}_o$
at $\mathbf{w}_o$ (using $\mathbf{R}\mathbf{w}_o = \mathbf{p}$).
Then $J_{\min} = \sigma_d^2 - \mathbf{p}^H \mathbf{R}^{-1} \mathbf{p}$ ($J_{\min}$ is independent of $\mathbf{w}$).
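$J_{\min} = \sigma_d^2 - \mathbf{p}^H \mathbf{R}^{-1}\mathbf{p}$ is a direct computation once $\mathbf{R}$, $\mathbf{p}$, and $\sigma_d^2$ are known. A sketch with hypothetical statistics (all values below are assumed):

```python
import numpy as np

# Hypothetical statistics: R, p, and the variance of d(n).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.5, 0.25])
sigma_d2 = 1.0

w_o = np.linalg.solve(R, p)        # optimum tap weights, R^{-1} p
J_min = sigma_d2 - p @ w_o         # = sigma_d^2 - p^H R^{-1} p
print(J_min)
```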
Canonical Form of the Error‐Performance Surface
• Rewrite the cost function in matrix form:
$J(\mathbf{w}) = \sigma_d^2 - \mathbf{w}^H \mathbf{p} - \mathbf{p}^H \mathbf{w} + \mathbf{w}^H \mathbf{R}\,\mathbf{w}$
• Next, express $J(\mathbf{w})$ as a perfect square in $\mathbf{w}$:
$J(\mathbf{w}) = \sigma_d^2 - \mathbf{p}^H \mathbf{R}^{-1} \mathbf{p} + (\mathbf{w} - \mathbf{R}^{-1}\mathbf{p})^H \mathbf{R}\,(\mathbf{w} - \mathbf{R}^{-1}\mathbf{p})$
• Then, by substituting $\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p}$ and $J_{\min} = \sigma_d^2 - \mathbf{p}^H \mathbf{R}^{-1} \mathbf{p}$:
• In other words,
$J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_o)^H \mathbf{R}\,(\mathbf{w} - \mathbf{w}_o)$
Canonical Form of the Error‐Performance Surface
• Observations:
• $J(\mathbf{w})$ is quadratic in $\mathbf{w}$,
• the minimum is attained at $\mathbf{w} = \mathbf{w}_o$,
• $J_{\min}$ is bounded below, and is always a positive quantity,
• $J(\mathbf{w}) > J_{\min}$ for all $\mathbf{w} \neq \mathbf{w}_o$, since $\mathbf{R}$ is positive definite.
Canonical Form of the Error‐Performance Surface
• Transformations may significantly simplify the analysis.
• Use the eigendecomposition of $\mathbf{R}$:
$\mathbf{R} = \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^H$
• Then
$J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_o)^H \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^H (\mathbf{w} - \mathbf{w}_o)$
• Let $\mathbf{v} = \mathbf{Q}^H(\mathbf{w} - \mathbf{w}_o)$ (canonical form)
• Substituting back into $J$:
$J(\mathbf{w}) = J_{\min} + \mathbf{v}^H \boldsymbol{\Lambda}\,\mathbf{v} = J_{\min} + \sum_{k=1}^{M} \lambda_k |v_k|^2$
• The components of the transformed vector $\mathbf{v}$ define the principal axes of the surface.
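The canonical form is easy to verify numerically. A sketch with hypothetical $\mathbf{R}$, $\mathbf{p}$, and $\sigma_d^2$ (assumed values, real‐valued case so the cost is $\sigma_d^2 - 2\mathbf{p}^T\mathbf{w} + \mathbf{w}^T\mathbf{R}\,\mathbf{w}$):

```python
import numpy as np

# Hypothetical second-order statistics (real-valued case).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.5, 0.25])
sigma_d2 = 1.0

w_o = np.linalg.solve(R, p)
J_min = sigma_d2 - p @ w_o

def J(w):
    # Real-valued quadratic cost: sigma_d^2 - 2 p^T w + w^T R w.
    return sigma_d2 - 2 * p @ w + w @ R @ w

lam, Q = np.linalg.eigh(R)          # R = Q diag(lam) Q^T for Hermitian R
w = np.array([0.3, -0.1])           # arbitrary test point
v = Q.T @ (w - w_o)                 # coordinates along the principal axes
canonical = J_min + v @ (lam * v)   # J_min + sum_k lam_k * v_k**2
print(J(w), canonical)              # the two forms agree
```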
36
Canonical Form of the Error‐Performance Surface
(Figure: contours $J(\mathbf{w}) = c$ in the $(w_1, w_2)$ plane with minimum $J(\mathbf{w}_o) = J_{\min}$ at $\mathbf{w}_o$; the transformation $\mathbf{Q}$ maps them to contours $J(\mathbf{v}) = c$ aligned with the principal axes $v_1, v_2$, whose curvatures are set by the eigenvalues $\lambda_1, \lambda_2$.)